Abstract

We consider nonstationary, nonlinear, discrete-time deterministic and stochastic control systems with Borel state and control spaces and with either bounded or unbounded costs. The control problem is to minimize an infinite-horizon total-cost performance index. Using dynamic programming arguments we show that, under suitable assumptions, the optimal cost functions satisfy optimality equations, which in turn yield a procedure for finding optimal control policies. We also prove the convergence of the value iteration (successive approximation) functions. Several examples illustrate our results under different sets of assumptions.
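
For orientation, a minimal sketch of what such optimality equations typically look like in the nonstationary stochastic case, under assumed notation not taken from the paper ($c_n$ is the stage cost, $Q_n$ the transition kernel, and $A_n(x)$ the set of admissible controls at time $n$):

% Illustrative only: the notation ($c_n$, $Q_n$, $A_n$, $V_n^*$) is assumed for this sketch.
\[
  V_n^*(x) \;=\; \inf_{a \in A_n(x)} \Bigl\{ c_n(x,a)
    + \int_X V_{n+1}^*(y)\, Q_n(dy \mid x,a) \Bigr\},
  \qquad n = 0,1,2,\dots,
\]
where $V_n^*$ denotes the optimal cost from time $n$ onward. Value iteration replaces $V_{n+1}^*$ on the right-hand side by the preceding approximation, and, under suitable assumptions, the resulting functions converge to $V_n^*$.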