Abstract

To solve nonsmooth and nonconvex problems involving complex constraints (such as standard NLP problems), we study general majorization-minimization procedures built from families of strongly convex subproblems. Using techniques from semi-algebraic geometry and variational analysis, in particular the Łojasiewicz inequality, we establish convergence of the sequences generated by such schemes to critical points. The broad applicability of this process is illustrated in the context of NLP; in that case, critical points coincide with KKT points. When the data are semi-algebraic or real analytic, our method applies, for instance, to the study of various sequential quadratic programming (SQP) schemes: the moving balls method, the penalized SQP method, and the extended SQP method. Under standard qualification conditions, this provides, to the best of our knowledge, the first general convergence results for general nonlinear programming problems. We emphasize that, unlike most works on this subject, no second-order conditions or convexity assumptions whatsoever are made. Rates of convergence are shown to be of the same form as those commonly encountered with first-order methods.
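The majorization-minimization principle underlying these schemes can be sketched as follows. This is only a generic illustration on a smooth one-dimensional nonconvex function with a quadratic surrogate, not any of the paper's SQP methods; the function, the constant `L`, and the iteration count are illustrative choices.

```python
def mm_descent(grad, x0, L, iters=200):
    """Generic majorization-minimization sketch: at the current
    iterate x_k, minimize the strongly convex quadratic surrogate
        q(x) = f(x_k) + grad(x_k) * (x - x_k) + (L / 2) * (x - x_k) ** 2,
    whose unique minimizer is x_k - grad(x_k) / L.
    q majorizes f when L bounds the curvature of f near x_k."""
    x = x0
    for _ in range(iters):
        x = x - grad(x) / L  # exact minimizer of the surrogate at x
    return x

# Nonconvex example: f(x) = (x^2 - 1)^2, critical points at -1, 0, 1.
grad = lambda x: 4 * x * (x ** 2 - 1)

# Started at 1.5 with L = 30 (an upper bound on f'' over [1, 1.5]),
# the iterates decrease f monotonically and converge to the critical
# point x = 1.
x_star = mm_descent(grad, x0=1.5, L=30.0)
```

The fixed quadratic surrogate is the simplest member of the family: the SQP variants studied in the paper replace it with richer strongly convex models that also account for the constraints.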

  • Publication date: 2016-05