Abstract

This paper presents a computational method for solving the Hamilton-Jacobi-Bellman equation arising from a nonlinear optimal control problem. Using Bellman's dynamic programming principle together with a nonlinear minimization method, the optimal feedback control is obtained from the value function under certain smoothness assumptions. The main contribution is a global minimizer flow for the Hamilton-Jacobi-Bellman equation, combined with an iterative process for solving the corresponding difference equation.
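For orientation, a standard finite-horizon formulation of the setting the abstract refers to is sketched below in LaTeX; the symbols $f$, $L$, $\Phi$, and the horizon $T$ are generic placeholders, and the paper's exact dynamics, cost, and assumptions may differ. With state dynamics $\dot{x}(s) = f(x(s), u(s))$, $x(t) = x$, running cost $L$, terminal cost $\Phi$, and value function $V(t, x) = \min_{u(\cdot)} \int_t^T L(x(s), u(s))\, ds + \Phi(x(T))$, the Hamilton-Jacobi-Bellman equation reads

\[
  -\partial_t V(t, x)
    = \min_{u} \Bigl\{ L(x, u) + \nabla_x V(t, x) \cdot f(x, u) \Bigr\},
  \qquad V(T, x) = \Phi(x),
\]

and the feedback optimal control is recovered from the pointwise minimizer,

\[
  u^{*}(t, x) \in \operatorname*{arg\,min}_{u}
    \Bigl\{ L(x, u) + \nabla_x V(t, x) \cdot f(x, u) \Bigr\}.
\]

This inner minimization over $u$ is the step the abstract's global minimizer flow and iteration process address when the equation is discretized into a difference equation.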