Abstract

Random single-hidden-layer feedforward neural networks (RSLFNs) are a popular class of learning algorithms proposed to improve on traditional gradient-based models, owing to their fast learning speed and acceptable performance. In an RSLFN, the input weights and/or other parameters are randomly initialized, and the remaining parameters are trained either iteratively or non-iteratively. However, the performance of an RSLFN is sensitive to the number of hidden neurons and to the randomly initialized parameters. Numerous methods have been successfully employed to improve RSLFNs from various perspectives. Because of their favourable search ability, metaheuristic optimization approaches have attracted increasing attention. Metaheuristic algorithms typically formulate the random parameters of an RSLFN as an optimization problem and then provide a near-optimal solution that can be converted into an RSLFN with better generalization performance. Hybrid methods for optimizing RSLFNs therefore show considerable potential in intelligent computing and artificial intelligence. However, no comprehensive survey of metaheuristic-optimized RSLFNs exists, which ultimately leads to missed opportunities for advancing the field. This paper first introduces the basic principles of RSLFNs along with several metaheuristic algorithms. Second, it provides a comprehensive survey of the state-of-the-art contributions in the area. Finally, current challenges are highlighted and promising research directions are presented.
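To make the setting concrete, the following is a minimal sketch (not from the surveyed paper; all function names are illustrative) of an RSLFN in the extreme-learning-machine style: input weights and hidden biases are drawn at random, and only the output weights are fitted, non-iteratively, by least squares. A naive random search over initializations then stands in for the metaheuristic that selects better random parameters.

```python
import numpy as np

def train_rslfn(X, y, n_hidden=32, seed=0):
    """Hypothetical minimal RSLFN trainer: random hidden layer,
    output weights solved non-iteratively by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # non-iterative output fit
    return W, b, beta

def predict_rslfn(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def random_search_rslfn(X_tr, y_tr, X_val, y_val, n_hidden=32, trials=20):
    """Stand-in for a metaheuristic: sample several random
    initializations and keep the one with lowest validation error."""
    best, best_err = None, np.inf
    for seed in range(trials):
        W, b, beta = train_rslfn(X_tr, y_tr, n_hidden, seed)
        err = np.mean((predict_rslfn(X_val, W, b, beta) - y_val) ** 2)
        if err < best_err:
            best, best_err = (W, b, beta), err
    return best, best_err
```

A real metaheuristic (e.g. particle swarm or a genetic algorithm) would search the continuous space of input weights and biases directly rather than merely resampling seeds, but the interface is the same: evaluate candidate random parameters by the resulting network's error and keep the best.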