Abstract

A single-hidden-layer feedforward neural network (SLFN) is an effective model for data classification and regression. However, training an SLFN with conventional iterative algorithms is rather time-consuming. To shorten the learning time, a non-iterative learning algorithm called the extreme learning machine (ELM) was proposed. Its main idea is that the input weights and biases are chosen randomly, while the output weights are computed with a pseudo-inverse matrix. However, ELM has a serious drawback: because of this randomness, it does not yield a stable solution across different runs. In this paper, we propose a stabilized learning algorithm based on iterative correction. The convergence analysis shows that the proposed algorithm completes the learning process in fewer steps than the number of hidden neurons. Three theorems, with proofs, establish that the proposed algorithm is stable. Experiments on several data sets from the UCI repository demonstrate that the proposed algorithm is effective.
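To make the ELM baseline concrete, the following is a minimal sketch of the standard ELM training step described above (random input weights and biases, output weights from the Moore-Penrose pseudo-inverse). All function names, the tanh activation, and the NumPy formulation are illustrative assumptions, not the paper's stabilized algorithm.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Basic ELM fit: random hidden layer, least-squares output weights.
    Illustrative sketch only; the paper's stabilized variant differs."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input weights W and biases b are drawn at random and never updated.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix (N x n_hidden)
    # Output weights via the Moore-Penrose pseudo-inverse: beta = H^+ T
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Forward pass: hidden activations times learned output weights.
    return np.tanh(X @ W + b) @ beta
```

Because W and b depend on the random seed, repeated runs with different seeds produce different output weights beta, which is exactly the instability the proposed correction scheme targets.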