Abstract

We study the convergence properties of a class of low-memory methods for solving large-scale unconstrained optimization problems. This class belongs to the quasi-Newton family, except that the approximation to the Hessian is updated at each step by means of a diagonal matrix. Using appropriate scaling, we show that the methods can be implemented so as to be globally and R-linearly convergent with standard inexact line searches. Preliminary numerical results suggest that the methods are a good alternative to other low-memory methods such as the CG and spectral gradient methods.
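The abstract names the key ingredients (a diagonal Hessian approximation, scaling, and an inexact line search) without stating the update formula. As a hedged illustration only, the sketch below pairs one standard diagonal quasi-Newton update from this literature, the weak-secant update that makes the smallest Frobenius-norm change to the diagonal while satisfying s^T B s = s^T y, with Armijo backtracking; the function name `diag_qn`, all parameter values, and the quadratic test problem are assumptions for the example, not the authors' exact scaled method.

```python
import numpy as np

def diag_qn(f, grad, x0, max_iter=500, tol=1e-6, c1=1e-4):
    """Diagonal quasi-Newton sketch with an Armijo (inexact) line search.

    The Hessian approximation B_k is stored as a vector b of diagonal
    entries and updated via the weak secant condition s^T B s = s^T y.
    This is one common diagonal update, not necessarily the paper's.
    """
    x = np.asarray(x0, dtype=float)
    b = np.ones_like(x)                # B_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = -g / b                     # search direction -B_k^{-1} g_k
        # Armijo backtracking: a standard inexact line search
        alpha, fx, gTd = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + c1 * alpha * gTd and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        # Weak-secant diagonal update: smallest Frobenius-norm change to b
        # such that s^T B_{k+1} s = s^T y holds.
        s2 = s * s
        denom = s2 @ s2
        if denom > 0.0:
            b = b + ((s @ y - b @ s2) / denom) * s2
        b = np.maximum(b, 1e-8)        # safeguard: keep B_k positive definite
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    # Convex quadratic test: f(x) = 0.5 x^T A x with diagonal A (illustrative).
    A = np.array([1.0, 10.0, 100.0])
    f = lambda x: 0.5 * x @ (A * x)
    grad = lambda x: A * x
    print(diag_qn(f, grad, np.ones(3)))   # approaches the minimizer at the origin
```

Because only the diagonal is stored, the method needs O(n) memory per iteration, which is what places it alongside the CG and spectral gradient methods mentioned above.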

  • Publication date: 2016-10