Abstract

A novel training algorithm is proposed to improve the convergence speed and stability of multi-layer feedforward neural networks. In this algorithm, a generalized objective function is constructed by adding an auxiliary constraint term to the sum of squared errors, and the weight matrix of the output layer is trained using this generalized objective function. The recursive equations for training the output-layer weight matrix are derived with the Newton iterative method without any simplification. The auxiliary constraint term imposes a smoothness requirement on the output, which improves the stability of the algorithm. Because higher-order derivative information of the neuron activation function is used during training, the algorithm converges quickly. Finally, the algorithm is applied to learn training patterns of different nonlinear functions. Simulation results show that the convergence rate and accuracy of the proposed algorithm are better than those of Karayiannis's second-order learning algorithm.
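The abstract does not state the exact form of the generalized objective function or of the auxiliary constraint term. As an illustrative sketch only, assuming $P$ training patterns with targets $t_p$, network outputs $y_p$, output-layer weight matrix $W$, some smoothness measure $s_p$ of the output, and a weighting coefficient $\lambda$ (all of these symbols are assumptions introduced here, not notation from the paper), such an objective and its Newton recursion for the output-layer weights could take the form

\[
J(W) \;=\; \frac{1}{2}\sum_{p=1}^{P}\bigl\lVert t_p - y_p(W)\bigr\rVert^{2}
\;+\; \frac{\lambda}{2}\sum_{p=1}^{P}\bigl\lVert s_p(W)\bigr\rVert^{2},
\qquad
W^{(k+1)} \;=\; W^{(k)} - \bigl[\nabla_W^{2} J\bigl(W^{(k)}\bigr)\bigr]^{-1}\nabla_W J\bigl(W^{(k)}\bigr),
\]

where the first sum is the usual squared-error term, the second sum plays the role of the auxiliary smoothness constraint, and the update is the full Newton step whose Hessian carries the higher-order derivative information of the activation function mentioned above.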

Full text