Abstract

It is well known that single hidden layer feed-forward neural networks (SLFNs) with at most n hidden neurons can learn n distinct samples with zero error, and that the weights connecting the input neurons to the hidden neurons and the hidden node thresholds can be chosen randomly. Namely, for n distinct samples there exist SLFNs with n hidden neurons that interpolate them; such networks are called exact interpolation networks for the samples. However, for some target functions to be approximated (such as continuous or integrable functions), not every exact interpolation network achieves a good approximation. This paper, by using a functional approach, rigorously proves that for given dist
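The following minimal sketch (not from the paper; the sample data, sigmoid activation, and dimensions are assumptions for illustration) shows the exact-interpolation claim numerically: with n hidden neurons whose input weights and thresholds are drawn at random, solving the linear system for the output weights fits n distinct samples with essentially zero error.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 10, 3                      # n distinct samples of input dimension d (assumed values)
X = rng.standard_normal((n, d))   # inputs x_1, ..., x_n (distinct with probability one)
T = rng.standard_normal((n, 1))   # targets t_1, ..., t_n

W = rng.standard_normal((d, n))   # randomly chosen input-to-hidden weights w_j
b = rng.standard_normal((1, n))   # randomly chosen hidden node thresholds b_j

# Hidden-layer output matrix H, with H[i, j] = g(w_j . x_i + b_j), here g = sigmoid
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # shape (n, n)

# Output weights solving H @ beta = T; H is invertible almost surely for such random W, b
beta = np.linalg.solve(H, T)

# Maximum training error of the resulting exact interpolation network (numerically ~0)
print(np.max(np.abs(H @ beta - T)))
```

As the abstract notes, zero training error alone does not guarantee that such a randomly constructed interpolation network approximates a target function well away from the samples.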