Abstract

We propose a simple time-discretization scheme for multi-dimensional stochastic optimal control problems in continuous time. It is based on a probabilistic representation for the convolution of the value function with a probability density function. We prove convergence under mild conditions on the coefficients of the problem, using the Barles-Souganidis viscosity solution method. The resulting numerical methods allow us to use uncontrolled Markov processes to estimate the conditional expectations in the dynamic programming procedure. Moreover, the scheme can be implemented without interpolation of the value function or adjustment of the diffusion matrix.
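
To make the methodology concrete, the sketch below shows a generic regression Monte Carlo dynamic programming loop of the kind alluded to in the abstract: conditional expectations are estimated along simulated paths of an uncontrolled process, and the value function is carried as regression coefficients so that no grid interpolation is needed. All model choices (the drift b(x, a) = a, the constant diffusion, the three-point control set, the terminal reward g, the polynomial basis) are illustrative assumptions; this is a minimal sketch, not the paper's convolution-based scheme.

```python
# Illustrative sketch only: a generic regression Monte Carlo dynamic
# programming loop in 1D, not the paper's convolution-based scheme.
# Assumed toy problem: maximise E[g(X_T)] for
#   dX_t = b(X_t, a_t) dt + sigma dW_t,  a_t in a finite control set A.
import numpy as np

rng = np.random.default_rng(0)

# --- assumed toy model -------------------------------------------------
T, N = 1.0, 20                         # horizon and number of time steps
h = T / N                              # time-step size
sigma = 0.3                            # constant diffusion coefficient
controls = np.array([-1.0, 0.0, 1.0])  # finite control set A
b = lambda x, a: a * np.ones_like(x)   # controlled drift b(x, a) = a
g = lambda x: -np.abs(x - 1.0)         # terminal reward g(x)

# --- forward simulation of an UNCONTROLLED process ---------------------
# The learning points are paths driven by the noise only (no control),
# mirroring the idea of estimating conditional expectations along an
# uncontrolled Markov process.
M = 5000                               # number of simulated paths
X = np.zeros((N + 1, M))
for k in range(N):
    X[k + 1] = X[k] + sigma * np.sqrt(h) * rng.standard_normal(M)

# Gauss-Hermite nodes/weights to integrate against the Gaussian increment
z, w = np.polynomial.hermite_e.hermegauss(8)
w = w / np.sqrt(2.0 * np.pi)           # normalise so the weights sum to 1

# --- backward dynamic programming --------------------------------------
deg = 4                                # degree of the value-function fit
V = g(X[N])                            # value along paths at maturity
for k in range(N - 1, -1, -1):
    # Global polynomial fit of the next-step value: the value function is
    # represented by regression coefficients, so no grid interpolation.
    v_next = np.poly1d(np.polyfit(X[k + 1], V, deg))
    # For each control, one Euler step of the CONTROLLED dynamics followed
    # by quadrature over the Gaussian increment.
    cont = np.empty((controls.size, M))
    for i, a in enumerate(controls):
        mean = X[k] + b(X[k], a) * h
        cont[i] = sum(wj * v_next(mean + sigma * np.sqrt(h) * zj)
                      for zj, wj in zip(z, w))
    V = cont.max(axis=0)               # optimise over the finite control set

print("approximate value at t=0, x=0:", V.mean())
```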

  • Publication date: 2014-11