Abstract

The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feed-forward neural networks, provides efficient unified learning solutions for regression and classification applications. Although it provides promising performance and robustness and has been used in various applications, the single-layer architecture may lack effectiveness when applied to natural signals. To overcome this shortcoming, this work proposes a new architecture based on a multilayer network framework. The significant contributions of this paper are as follows: 1) unlike existing multilayer ELMs, in which the hidden nodes are obtained randomly, in this paper all hidden layers with invertible activation functions are calculated by pulling the network output back and feeding it into the hidden layers; feature learning is thus enriched by additional information, which results in better performance; 2) in contrast to existing multilayer network methods, which are usually effective only for classification applications, the proposed architecture is implemented for dimension reduction and image reconstruction; and 3) unlike other iterative learning-based deep networks (DL), the hidden layers of the proposed method are obtained via four steps, so it has much better learning efficiency than DL. Experimental results on 33 datasets indicate that, in comparison with other existing dimension reduction techniques, the proposed method achieves competitive or better performance with much faster training speed.
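To make the "pulling the network output back" idea concrete, the following is a minimal, illustrative sketch and not the authors' exact four-step algorithm. It assumes a single hidden layer, a sigmoid activation inverted via the logit, and an autoencoder-style target (the input itself); all variable names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def inv_sigmoid(h, eps=1e-6):
    # Invertible activation: clip to (0, 1) so the logit stays finite.
    h = np.clip(h, eps, 1.0 - eps)
    return np.log(h / (1.0 - h))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))   # toy data: 200 samples, 30 features
T = X                                # reconstruction target (autoencoder-style)
n_hidden = 10                        # bottleneck size for dimension reduction

# Step 1: random hidden layer, as in a standard ELM.
W = rng.standard_normal((30, n_hidden))
H = sigmoid(X @ W)

# Step 2: output weights by least squares (Moore-Penrose pseudoinverse).
beta = np.linalg.pinv(H) @ T

# Step 3: pull the target back through beta to get desired hidden activations.
H_desired = T @ np.linalg.pinv(beta)

# Step 4: invert the activation and re-fit the input-to-hidden weights,
# yielding a data-driven (no longer random) low-dimensional representation.
W_updated = np.linalg.pinv(X) @ inv_sigmoid(H_desired)
H_new = sigmoid(X @ W_updated)

# Re-solve the output weights and check the reconstruction error.
beta_new = np.linalg.pinv(H_new) @ T
print("reconstruction error:", np.linalg.norm(H_new @ beta_new - T))
```

In this sketch, `H_new` plays the role of the learned low-dimensional features, and stacking such layers would give a multilayer variant; the actual architecture and update rules in the paper may differ.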