Abstract

In recent years, the Extreme Learning Machine (ELM) has attracted considerable attention as a universal function approximator. Compared with other single-layer feedforward neural networks, the input parameters of its hidden neurons can be randomly generated rather than tuned, thereby saving a large amount of computation. However, it has been pointed out that the randomness of the ELM parameters results in fluctuating performance. In this paper, we investigate the randomness-reduction effect of a regularized version of ELM, named Ridge ELM (RELM). RELM has previously been shown to achieve generally better generalization than the original ELM. Furthermore, we demonstrate on 12 real-world regression tasks that RELM also greatly reduces this fluctuation in performance. An insight into this randomness-reduction effect is also given.
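The difference between the two methods is easiest to see in how the output weights are computed. Below is a minimal NumPy sketch, assuming a sigmoid hidden layer; the names (`elm_fit`, `n_hidden`, `lam`) are illustrative and not taken from the paper. The original ELM solves for the output weights with a pseudoinverse, while RELM adds a ridge penalty λ.

```python
# Minimal sketch of ELM vs. Ridge ELM (RELM), assuming a sigmoid hidden layer.
# All names and default values here are illustrative, not from the paper.
import numpy as np

def elm_fit(X, T, n_hidden=50, lam=None, seed=None):
    """Fit a single-hidden-layer ELM; if lam is given, use the ridge (RELM) solution."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden-neuron input weights and biases are drawn at random, not tuned.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix
    if lam is None:
        # Original ELM: minimum-norm least-squares output weights via pseudoinverse.
        beta = np.linalg.pinv(H) @ T
    else:
        # RELM: ridge-regularized output weights, (H^T H + lam * I)^{-1} H^T T.
        beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Intuitively, the ridge term damps the influence of the smallest singular values of the hidden-layer output matrix H, so the fitted output weights vary less across different random draws of W and b; this is the kind of randomness-reduction effect the paper examines.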

  • Publication date: 2013
