Abstract

This paper presents a type of optimized neural network with limited precision weights (LPWNN). Such networks require less memory for storing the weights and less expensive floating-point units to perform the computations involved, and are therefore better suited for embedded-system implementation than networks with real-valued weights. Based on an analysis of the learning capability of LPWNNs, a Quantize Back-propagation Step-by-Step (QBPSS) algorithm is proposed for such networks to overcome the effects of limited precision. Methods for designing and training LPWNNs are presented, including the quantization of the non-linear activation function and the selection of the learning rate, network architecture, and weight precision. The performance of the optimized LPWNN was evaluated against conventional neural networks with double-precision floating-point weights on an image-based road-recognition task for an intelligent vehicle running on an ARM9 embedded system; the results show that the optimized LPWNN is 7 times faster than the conventional networks.
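To make the notion of limited precision weights concrete, the following is a minimal illustrative sketch, not the paper's actual procedure, of rounding a real-valued weight to a fixed-point representation with a configurable number of fractional bits. The function names and the choice of 16-bit storage with 8 fractional bits are assumptions made for illustration only.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: quantize a double-precision weight to a fixed-point
 * value with `frac_bits` fractional bits -- the basic idea behind limited
 * precision weights.  The concrete format used in the paper may differ. */
static int16_t quantize_weight(double w, int frac_bits)
{
    double scale = (double)(1 << frac_bits);   /* step size = 2^-frac_bits */
    double q = round(w * scale);

    /* Saturate to the representable 16-bit range instead of wrapping. */
    if (q > INT16_MAX) q = INT16_MAX;
    if (q < INT16_MIN) q = INT16_MIN;
    return (int16_t)q;
}

/* Recover the approximate real value, e.g. for quantization-error analysis. */
static double dequantize_weight(int16_t q, int frac_bits)
{
    return (double)q / (double)(1 << frac_bits);
}

int main(void)
{
    double w = 0.37421;                 /* an arbitrary real-valued weight */
    int16_t q = quantize_weight(w, 8);  /* 8 fractional bits assumed       */
    printf("original %.5f -> quantized %d -> restored %.5f\n",
           w, q, dequantize_weight(q, 8));
    return 0;
}
```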