Abstract

This paper presents a neurodynamic approach based on a recurrent neural network for solving convex optimization problems with general constraints. It is proved that, for any initial point, the state of the proposed neural network reaches the constraint set in finite time and eventually converges to an optimal solution of the convex optimization problem. In contrast to existing related neural networks, the convergence rate of the state of the proposed neural network can be quantified via the Lojasiewicz exponent under some mild assumptions. As applications, we explicitly estimate some Lojasiewicz exponents to characterize the convergence rate of the state of the proposed neural network when solving convex quadratic optimization problems. Finally, numerical examples are given to demonstrate the effectiveness of the proposed neural network.