Abstract

The training of recurrent neural networks (RNNs) involves selecting both their structures and their connection weights. To efficiently enhance the generalization capability of RNNs, this paper proposes a recurrent self-organizing neural network (RSONN) that uses an adaptive growing and pruning algorithm (AGPA) to improve performance. The AGPA self-organizes the structure of the RNN based on the information processing ability and competitiveness of its hidden neurons during learning; hidden neurons can then be added or pruned to improve generalization performance. Furthermore, a second-order algorithm with an adaptive learning rate is employed to adjust the parameters of the RSONN, and a convergence analysis is given to demonstrate its computational efficiency. To demonstrate the merits of RSONN for data modeling, several benchmark datasets and a real-world application involving nonlinear systems modeling are examined, with comparisons against other existing methods. Experimental results show that the proposed RSONN effectively simplifies the network structure and performs better than several existing methods.
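To make the growing-and-pruning idea concrete, here is a minimal sketch of one structural self-organization step. The per-neuron "competitiveness" metric (normalized activation energy) and the thresholds `grow_thresh` and `prune_thresh` are illustrative assumptions, not the paper's exact AGPA criteria.

```python
import numpy as np

def self_organize(hidden_activations, grow_thresh=0.9, prune_thresh=0.05):
    """Sketch of one growing/pruning decision over a hidden layer.

    hidden_activations: (n_samples, n_neurons) array of hidden outputs.
    Returns (keep, grow): indices of neurons to retain, and a flag
    indicating whether a new neuron should be added.
    NOTE: the competitiveness measure below is an assumed stand-in
    for the paper's information-processing-ability criterion.
    """
    energy = np.mean(hidden_activations ** 2, axis=0)      # per-neuron activity
    comp = energy / (energy.sum() + 1e-12)                 # normalized share
    keep = np.where(comp >= prune_thresh / len(comp))[0]   # prune weak neurons
    grow = bool(comp.max() > grow_thresh)                  # add capacity if one neuron dominates
    return keep, grow
```

In a training loop, such a step would run periodically between weight updates, so the network's size adapts while the (here unspecified) second-order algorithm tunes the remaining parameters.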