Abstract

This paper studies how the rank of the input matrix changes in the Extreme Learning Machine (ELM) and the relationship between the rank of the input matrix and the residual error of training an ELM. From the viewpoint of data analysis, the study reveals why the residual error of an ELM decreases as the number of hidden-layer nodes increases, and what role the Sigmoid function plays in increasing the rank of the input matrix. Furthermore, the relationship between the stability of the solutions and the rank of the output matrix is discussed. Finally, an application of the residual error to genetic algorithms for minimizing an L1-norm ELM is given.
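As a rough illustration of the claim that the training residual of an ELM shrinks as hidden nodes are added, the following minimal sketch (not from the paper; the synthetic data and hidden-layer sizes are assumptions chosen for demonstration) trains a basic ELM with random input weights, a Sigmoid hidden layer, and least-squares output weights, and reports the residual and the rank of the hidden-layer output matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (illustrative assumption): 200 samples, 5 features, scalar target.
X = rng.standard_normal((200, 5))
y = np.sin(X.sum(axis=1, keepdims=True)) + 0.1 * rng.standard_normal((200, 1))

def elm_train_residual(X, y, n_hidden, rng):
    """Train a basic ELM: random input weights and biases, Sigmoid hidden layer,
    output weights by least squares. Return the training residual and rank of H."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal((1, n_hidden))            # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # Sigmoid hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights via least squares
    return np.linalg.norm(H @ beta - y), np.linalg.matrix_rank(H)

for n_hidden in (5, 20, 50, 100, 200):
    res, rank = elm_train_residual(X, y, n_hidden, rng)
    print(f"hidden nodes = {n_hidden:4d}  rank(H) = {rank:4d}  residual = {res:.4f}")
```

Running this typically shows the rank of the hidden-layer matrix growing with the number of hidden nodes and the training residual decreasing accordingly, which is the empirical behavior the abstract refers to.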