Abstract

In this paper, we present a new nonmonotone memory gradient algorithm for unconstrained optimization problems. An attractive property of the proposed method is that the search direction is always a sufficient descent direction at each iteration, independently of the line search used. Under mild assumptions, both global and local convergence results for the proposed algorithm are established. Numerical results are also reported, showing that the proposed method is suitable for solving large-scale optimization problems and is more stable than similar methods in practical computation.
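To make the ingredients concrete, the sketch below combines a generic memory gradient direction d_k = -g_k + beta_k * d_{k-1} with a Grippo-Lampariello-Lucidi-type nonmonotone line search. This is not the paper's algorithm: the specific cap on beta_k (chosen here so that g_k^T d_k <= -(1 - rho)||g_k||^2 holds regardless of the step size, mirroring the sufficient descent property claimed in the abstract), the parameter names, and all default values are illustrative assumptions.

```python
import numpy as np

def nonmonotone_memory_gradient(f, grad, x0, rho=0.5, delta=1e-4,
                                sigma=0.5, M=10, tol=1e-6, max_iter=1000):
    """Illustrative sketch (not the paper's method) of a nonmonotone
    memory gradient iteration.

    Direction: d_k = -g_k + beta_k * d_{k-1}, where beta_k is capped so
    that g_k^T d_k <= -(1 - rho) * ||g_k||^2 at every iteration, i.e.
    sufficient descent holds independently of the line search.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    fvals = [f(x)]  # recent function values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Nonmonotone (GLL-type) backtracking: the Armijo condition is
        # checked against the max of the last M function values rather
        # than f(x_k) alone, so occasional increases are tolerated.
        f_ref = max(fvals[-M:])
        gTd = g @ d  # negative by construction of d
        alpha = 1.0
        while f(x + alpha * d) > f_ref + delta * alpha * gTd:
            alpha *= sigma
        x = x + alpha * d
        g_new = grad(x)
        # Memory gradient direction; the bound |beta| * ||d|| <= rho * ||g_new||
        # gives g_new^T d_new <= -(1 - rho) * ||g_new||^2 (assumed choice).
        beta = rho * np.linalg.norm(g_new) / max(np.linalg.norm(d), 1e-12)
        d = -g_new + beta * d
        g = g_new
        fvals.append(f(x))
    return x
```

For example, calling nonmonotone_memory_gradient with the Rosenbrock function, its gradient, and x0 = np.zeros(2) converges to the minimizer (1, 1) under these default parameters; any other choice of beta_k satisfying the same cap would preserve the sufficient descent guarantee.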

Full Text