Abstract

The limited memory BFGS method (L-BFGS) is an adaptation of the BFGS method for large-scale unconstrained optimization. However, the L-BFGS method need not converge for nonconvex objective functions, and it is inefficient on highly ill-conditioned problems. In this paper, we propose a regularization strategy for the L-BFGS method, in which the regularization parameter plays a compensating role when the Hessian approximation becomes ill-conditioned. Based on this strategy, we propose a regularized L-BFGS method and establish its global convergence even when the objective function is nonconvex. Numerical results show that the proposed method is efficient.
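
To make the idea concrete, the following is a minimal sketch of one common way such a regularization can enter the step computation; the notation ($B_k$ for the L-BFGS Hessian approximation at $x_k$, $\mu_k > 0$ for the regularization parameter, $\alpha_k$ for the step size) is illustrative and not necessarily the exact formulation used in the paper:
% Sketch (our notation): the search direction d_k solves a regularized
% system, and the iterate is updated along d_k with a line-search step.
\[
  (B_k + \mu_k I)\, d_k = -\nabla f(x_k),
  \qquad
  x_{k+1} = x_k + \alpha_k d_k .
\]
In such a scheme, a larger $\mu_k$ improves the conditioning of the system being solved when $B_k$ tends to become ill-conditioned, which is one way to interpret the compensation role mentioned above.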