Abstract

Nonlinear stochastic optimal control theory has played an important role in many fields. In this theory, uncertainty in the dynamics has usually been represented by Brownian motion, i.e., Gaussian white noise. However, many stochastic phenomena exhibit probability densities with long tails, which suggests the need to study the effect of non-Gaussianity. This paper employs Lévy processes, which generate outliers with significantly higher probability than Brownian motion, to describe such uncertainty. In general, the optimal control law is obtained by solving the Hamilton-Jacobi-Bellman equation. This paper shows that the path-integral approach combined with the policy iteration method can be applied efficiently to solve the Hamilton-Jacobi-Bellman equation in the Lévy problem setting. Finally, numerical simulations illustrate the usefulness of this method.
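To give a sense of the path-integral approach mentioned above, the following is a minimal sketch of the classical Gaussian-noise case (Kappen-style path-integral control), not the paper's Lévy extension. All names (`pi_control`, `state_cost`) and the 1-D dynamics dx = u dt + dW with quadratic running cost are illustrative assumptions: the optimal control at the initial state is estimated as a cost-weighted average of the first noise increment over uncontrolled Monte Carlo rollouts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: 1-D dynamics dx = u dt + dW, running cost x^2.
# Path-integral control expresses the optimal control as
#   u*(x0) ~ E[eps_0 * exp(-S/lam)] / (dt * E[exp(-S/lam)]),
# where S is the path cost of an uncontrolled rollout and eps_0 is
# its first noise increment.
dt, T, lam = 0.01, 1.0, 1.0
steps = int(T / dt)
K = 5000  # number of Monte Carlo rollouts

def state_cost(x):
    return x ** 2  # illustrative running cost

def pi_control(x0):
    # Sample K uncontrolled trajectories driven by Brownian increments.
    eps = rng.normal(0.0, np.sqrt(dt), size=(K, steps))
    paths = x0 + np.cumsum(eps, axis=1)
    S = state_cost(paths).sum(axis=1) * dt        # path costs
    w = np.exp(-(S - S.min()) / lam)              # stabilized weights
    return (w @ eps[:, 0]) / (dt * w.sum())       # weighted first noise

u = pi_control(2.0)
print(u)  # negative: the control pushes the state toward the origin
```

Replacing the Gaussian increments with heavy-tailed Lévy increments is exactly where the non-Gaussian setting studied in the paper departs from this sketch.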

  • Publication date: 2017-3