Abstract

This paper is concerned with the minimization of a function whose closed-form analytical expression is unknown, subject to well-defined and differentiable constraints. We assume that data are available to train a multi-layer perceptron, which can then be used to estimate the gradient of the objective function. We combine this estimate with the gradients of the constraints to approximate the reduced gradient, which is ultimately used to determine a feasible descent direction. We call this variant of the reduced gradient method the Neural Reduced Gradient algorithm. We evaluate its performance on a large set of constrained convex and nonconvex test problems. We also present an application of the new method to the minimization of shear stress for drag reduction in the control of turbulence.
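To make the idea concrete, the following is a minimal, illustrative sketch of the approach described above, not the authors' implementation: a surrogate multi-layer perceptron is trained on sampled (x, f(x)) pairs, its finite-difference gradient stands in for the unknown gradient of the objective, and the classical reduced gradient is then formed for a known linear equality constraint A x = b. The function `sample_f`, the problem sizes, the choice of basic variables, and the use of scikit-learn's `MLPRegressor` are assumptions made only for this example.

```python
# Illustrative sketch only (not the authors' code or data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# --- black-box objective, observable only through sampled data ---------------
def sample_f(x):                      # hypothetical stand-in for the unknown f
    return np.sum(x**2) + np.sin(x[0])

n, m = 4, 2                           # number of variables / equality constraints
A = rng.standard_normal((m, n))       # known, differentiable constraints A x = b
b = A @ np.ones(n)                    # chosen so that x0 = (1,...,1) is feasible

# --- train the surrogate MLP on observed (x, f(x)) pairs ---------------------
X = rng.uniform(-2.0, 2.0, size=(500, n))
y = np.array([sample_f(x) for x in X])
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

def grad_estimate(x, eps=1e-3):
    """Estimate the objective gradient by central differences on the surrogate."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (mlp.predict([x + e])[0] - mlp.predict([x - e])[0]) / (2 * eps)
    return g

def reduced_gradient_direction(x, basic=(0, 1)):
    """Approximate reduced gradient and a first-order feasible descent direction."""
    nonbasic = [i for i in range(n) if i not in basic]
    B, N = A[:, list(basic)], A[:, nonbasic]      # partition of the constraint matrix
    g = grad_estimate(x)
    gB, gN = g[list(basic)], g[nonbasic]
    r = gN - N.T @ np.linalg.solve(B.T, gB)       # reduced gradient estimate
    dN = -r                                        # descent step in nonbasic variables
    dB = -np.linalg.solve(B, N @ dN)               # keeps A (x + d) = b
    d = np.zeros(n)
    d[list(basic)], d[nonbasic] = dB, dN
    return d

x0 = np.ones(n)                        # feasible starting point by construction
print(reduced_gradient_direction(x0))
```

In this sketch the estimated reduced gradient plays the role of the exact one in the usual method: the nonbasic components move along its negative, while the basic components are adjusted through the constraint Jacobian so that the step remains feasible to first order.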

  • Publication date: 2015-12