Abstract

Extreme Learning Machine (ELM) has attracted considerable attention as a universal function approximator owing to its extremely fast learning speed and good generalization performance. Compared to other learning methods for Single Layer Feedforward Networks (SLFNs), the distinguishing feature of the ELM is that the input parameters of the hidden neurons are randomly generated rather than iteratively tuned, thereby dramatically reducing the computational burden. However, it has been pointed out that this randomness in the ELM parameters can result in fluctuating performance. In this paper, we systematically investigate the performance stabilization effect brought by a regularized variant of the ELM, named Regularized ELM (RELM). Furthermore, by using the PREdiction Sum of Squares (PRESS) statistic and a unique property of the RELM, we propose a semi-cross-validation algorithm that efficiently realizes robust RELM-based model selection for SLFNs, termed Automatic Regularized Extreme Learning Machine with Leave-One-Out cross-validation (AR-ELM-LOO). The simulation results show that the AR-ELM-LOO significantly reduces the performance fluctuation caused by the randomness of the ELM and produces results nearly identical to those of the full cross-validation procedure.
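For context, the kind of cheap LOO computation the abstract refers to typically rests on the standard PRESS identity for regularized least squares; a minimal sketch of that identity follows, with illustrative notation that is not necessarily the paper's ($H$ denotes the $N \times L$ hidden-layer output matrix, $T$ the target vector, and $\lambda$ the RELM regularization parameter):

% PRESS/LOO identity for regularized least squares (standard result;
% notation is illustrative, not taken from the paper itself).
\[
  \hat{\beta} = \left(H^{\top}H + \lambda I\right)^{-1} H^{\top} T,
  \qquad
  \mathrm{HAT} = H \left(H^{\top}H + \lambda I\right)^{-1} H^{\top},
\]
\[
  e_i^{\mathrm{LOO}} = \frac{t_i - \mathbf{h}_i \hat{\beta}}{1 - \mathrm{HAT}_{ii}},
  \qquad
  \mathrm{PRESS} = \sum_{i=1}^{N} \left( e_i^{\mathrm{LOO}} \right)^{2},
\]

where $\mathbf{h}_i$ is the $i$-th row of $H$. Because every LOO residual is obtained from a single training pass via the diagonal of the HAT matrix, the full leave-one-out error can be evaluated without the $N$ retrainings that a naive cross-validation would require.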