Abstract

Despite the popularity of deep convolutional neural networks (CNNs), efficiently training network models remains challenging. In this paper, we present a layer-wise learning-based stochastic gradient descent method (LLb-SGD) for gradient-based optimization of objective functions in deep learning, which is simple and computationally efficient. Inspired by the cross-media propagation of light in natural environments, we assign an adaptive learning rate to each layer of the network. To find a suitable local optimum quickly, the dynamic learning-rate schedule spanning different layers adaptively adjusts the descent speed of the objective function across scales and dimensions. To the best of our knowledge, this is the first attempt to introduce an adaptive layer-wise learning-rate schedule with a partial convergence guarantee. Owing to its generality and robustness, the method is insensitive to hyper-parameters and can therefore be applied to a wide range of network architectures and datasets. Finally, we report promising results relative to other optimization methods on two image-classification benchmarks using five standard network architectures.
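
To make the notion of a layer-wise learning rate concrete, the following is a minimal sketch in PyTorch. It is not the paper's LLb-SGD rule (the abstract does not specify the light-propagation-based schedule); the depth-dependent decay factor and the example model are placeholder assumptions used only to illustrate assigning one learning rate per layer via optimizer parameter groups.

```python
import torch
import torch.nn as nn

# Small example network; the actual architectures and datasets are those
# evaluated in the paper, not this toy model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 10),
)

base_lr = 0.1

# Collect only the modules that actually hold trainable parameters.
layers = [m for m in model if any(p.requires_grad for p in m.parameters(recurse=False))]

param_groups = []
for depth, layer in enumerate(layers):
    # Placeholder schedule: attenuate the learning rate with depth, loosely
    # analogous to light weakening as it crosses media. LLb-SGD's actual
    # per-layer rule is defined in the paper, not here.
    lr = base_lr * (0.5 ** depth)
    param_groups.append({"params": layer.parameters(), "lr": lr})

# Standard SGD, but each layer (parameter group) keeps its own learning rate.
optimizer = torch.optim.SGD(param_groups, momentum=0.9)
```

In this setup each parameter group carries its own `lr`, so any per-layer schedule, including an adaptive one updated during training, can be applied by modifying `optimizer.param_groups[i]["lr"]` between steps.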