Abstract

The role of the input dimension d in approximating, in various norms, target sets of d-variable functions by linear combinations of adjustable computational units is studied. Results from the literature, which emphasize the number n of terms in the linear combination, are reformulated, and in some cases improved, with particular attention to the dependence on d. For the worst-case error, upper bounds are given in the factorized form ξ(d)κ(n), where κ is nonincreasing (typically κ(n) ∼ n^(-1/2)). Target sets of functions are described for which the function ξ is a polynomial. Some important cases are highlighted where ξ decreases to zero as d → ∞. For target functions, the extent (e.g., the size of the domains on which they are defined), scale (e.g., the maximum norms of the target functions), and smoothness (e.g., the order of square-integrable partial derivatives) may depend on d, and the influence of such dimension-dependent parameters on model complexity is considered. The results are applied to approximation and to the solution of optimization problems by neural networks with perceptron and Gaussian radial computational units.
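As a minimal sketch of a bound in the factorized form ξ(d)κ(n), the classical Maurey–Jones–Barron estimate can serve as an illustration; the setting below (a bounded set G of computational units in a Hilbert space, with s_G its supremal norm and span_n G the set of n-term linear combinations) is an assumption chosen for this sketch, not a statement taken from the abstract itself.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch only, assuming the Maurey--Jones--Barron setting:
% G is a bounded subset of a Hilbert space,
% s_G = \sup_{g \in G} \|g\|, and f lies in the closure of
% the convex hull of G.
\[
  \inf_{f_n \in \operatorname{span}_n G} \|f - f_n\|
  \;\le\; s_G \, n^{-1/2}.
\]
% Any dependence on the input dimension d enters only through
% the factor s_G, which plays the role of xi(d); here
% kappa(n) = n^{-1/2}, the typical rate named in the abstract.
\end{document}
```

In bounds of this type, sharpening the dimension-dependent factor (the analogue of s_G above) while keeping κ(n) ∼ n^(-1/2) is exactly the kind of reformulation the abstract describes.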

  • Publication date: 2012-2