Abstract

A learning scheme based on the Extreme Learning Machine (ELM) and L1/2 regularization is proposed for a double parallel feedforward neural network (DPFNN). ELM has been widely used as a fast learning method for feedforward networks with a single hidden layer. A key problem for ELM is choosing the (minimum) number of hidden nodes. To address this problem, we propose to combine the L1/2 regularization method, which has become popular in informatics in recent years, with ELM. Our experiments show that incorporating the L1/2 regularizer into the ELM-trained DPFNN results in fewer hidden nodes with equally good performance.
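
To make the baseline concrete, the following is a minimal sketch of standard ELM training for a single-hidden-layer feedforward network, assuming NumPy only; the function names (elm_fit, elm_predict) and the toy data are illustrative, and the paper's L1/2-regularized, double-parallel variant is not reproduced here.

```python
# Minimal ELM sketch (assumption: standard ELM, not the paper's DPFNN/L1/2 method).
import numpy as np

def elm_fit(X, T, n_hidden, seed=0):
    """Fit a basic ELM: random, fixed hidden weights; least-squares output weights."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))   # random input weights (kept fixed)
    b = rng.uniform(-1.0, 1.0, size=n_hidden)        # random hidden biases (kept fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: regress y = sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
T = np.sin(X) + 0.05 * rng.standard_normal(X.shape)
W, b, beta = elm_fit(X, T, n_hidden=30)
print("training MSE:", np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

Because the hidden weights are never trained, the only free parameter of practical concern is the number of hidden nodes, which is exactly the quantity the L1/2 regularizer is used to keep small in the proposed scheme.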

Full Text