Abstract

A model that takes advantage of wavelet-like functions in the functional form of a neural network is used for function approximation. The scale parameters are used primarily, neglecting the usual translation parameters in the function expansion. Two training operations are then investigated: the first consists of optimizing the output synaptic weights, and the second of optimizing the scale parameters hidden inside the elementary tasks. Building upon previously published results, it is found that if (p + 1) scale parameters merge during the learning process, derivatives of order p emerge spontaneously in the functional basis. It is also found that, for those tasks which induce such mergings, the function approximation can be improved and the training time reduced by directly implementing the elementary tasks and their derivatives in the functional basis. Attention has also been devoted to the role that the transfer functions, the number of iterations, and the number of formal neurons may play during and after the learning process. The results complement previously published results on this problem.
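The following is a minimal sketch, not the authors' implementation, of the two training operations described above for a scale-only wavelet-like expansion f(x) ≈ Σ_i w_i ψ(a_i x): the output synaptic weights w_i are fitted by linear least squares, and the scale parameters a_i are then refined by gradient descent. The choice of mother function ψ, the alternating optimisation scheme, and all hyperparameters are assumptions made for illustration only.

```python
import numpy as np

def psi(u):
    """Assumed mother function: a 'Mexican hat'-like wavelet (not specified in the abstract)."""
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

def design_matrix(x, scales):
    """Columns are the elementary functions psi(a_i * x) evaluated on the grid x."""
    return psi(np.outer(x, scales))                       # shape (len(x), len(scales))

def fit_output_weights(x, y, scales):
    """First training operation: optimise the output synaptic weights
    by linear least squares, with the scale parameters held fixed."""
    Phi = design_matrix(x, scales)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def refine_scales(x, y, scales, lr=1e-3, steps=500):
    """Second training operation: gradient descent on the scale parameters,
    re-solving the linear output weights at each step (a simple alternating scheme)."""
    a = scales.astype(float).copy()
    for _ in range(steps):
        w = fit_output_weights(x, y, a)
        r = design_matrix(x, a) @ w - y                   # residual
        # d psi(a_i x)/d a_i = x * psi'(a_i x), with psi'(u) = (u^3 - 3u) exp(-u^2/2)
        U = np.outer(x, a)
        dPhi = x[:, None] * (U**3 - 3.0 * U) * np.exp(-0.5 * U**2)
        grad = (dPhi.T @ r) * w                           # gradient of 0.5 * ||r||^2 w.r.t. a
        a -= lr * grad
    return a, fit_output_weights(x, y, a)

# Toy usage: approximate an arbitrary target function on [-3, 3].
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(2.0 * x) * np.exp(-0.2 * x**2)
scales = np.linspace(0.5, 3.0, 8)                         # initial scale parameters
a_opt, w_opt = refine_scales(x, y, scales)
approx = design_matrix(x, a_opt) @ w_opt
print("RMS error:", np.sqrt(np.mean((approx - y)**2)))
```

In such a scheme, two scales a_i and a_j drifting to a common value make their basis columns nearly collinear, which is the kind of merging the abstract refers to; the paper's remedy is to include the elementary functions' derivatives directly in the basis.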

  • Publication date: 2008-11