Abstract

A general deep learning (DL) mechanism for a multiple-hidden-layer feed-forward neural network contains two parts: 1) an unsupervised greedy layer-wise training and 2) a supervised fine-tuning, which is usually an iterative process. Although this mechanism has been demonstrated in many fields to significantly improve the generalization of neural networks, there is no clear evidence showing which of the two parts plays the essential role in the generalization improvement, resulting in an argument within the DL community. Focusing on this argument, this paper proposes a new DL approach to train multilayer feed-forward neural networks. This approach uses restricted Boltzmann machines (RBMs) for the layer-wise training and the generalized inverse of a matrix for the supervised fine-tuning. Different from general deep training mechanisms such as back-propagation (BP), the proposed approach does not need to iteratively tune the weights and therefore has many advantages, such as quick training, better generalization, and high understandability. Experimentally, the proposed approach demonstrates excellent performance in comparison with BP-based DL and the traditional training method for multilayer random weight neural networks. To a great extent, this paper demonstrates that the supervised part plays a more important role than the unsupervised part in DL, which provides some new viewpoints for exploring the essence of DL.
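To make the non-iterative supervised step concrete, the following is a minimal sketch, not the paper's implementation: it assumes a hidden-layer activation matrix H has already been produced by the RBM-pretrained layers (here replaced by random stand-in data), and solves for the output weights in closed form with the Moore-Penrose generalized inverse rather than by iterative BP updates.

```python
import numpy as np

# Hypothetical dimensions for illustration only.
n_samples, n_hidden, n_outputs = 200, 64, 10

rng = np.random.default_rng(0)

# H stands in for the activations of the last RBM-pretrained hidden layer;
# in the actual approach it would come from the unsupervised layer-wise stage.
H = np.tanh(rng.standard_normal((n_samples, n_hidden)))

# T stands in for the supervised targets (e.g., one-hot class labels).
T = rng.standard_normal((n_samples, n_outputs))

# Non-iterative supervised fine-tuning: solve min ||H W - T|| in closed form
# using the Moore-Penrose generalized inverse instead of gradient iterations.
W_out = np.linalg.pinv(H) @ T

predictions = H @ W_out
print(predictions.shape)  # (200, 10)
```

In this sketch the closed-form solve replaces the iterative weight tuning of BP, which is the source of the quick-training claim made in the abstract.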