Authors: V. Kreinovich, O. Sirisaengtaksin, S. Cabrera
Keywords:
Abstract: Neural networks are universal approximators. For example, it has been proved (K. Hornik et al., 1989) that for every ε>0, an arbitrary continuous function on a compact set can be ε-approximated by a 3-layer neural network. This and other results show that, in principle, any function (e.g., any control function) can be implemented by an appropriate neural network. But why neural networks? In addition to neural networks, functions can also be approximated by polynomials, etc. What is so special about neural networks that makes them preferable approximators? To compare different approximators, one can compare the number of bits that must be stored in order to reconstruct a function with a given precision ε: for neural networks, the weights and thresholds; for polynomials, the coefficients; etc. We consider functions of one variable and show that for some special neurons (corresponding to wavelets), neural networks are optimal approximators in the sense that they require (asymptotically) the smallest possible number of bits.
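As a rough illustration of the bit-counting comparison described in the abstract (not taken from the paper), the sketch below approximates one function of one variable both by a truncated wavelet expansion and by a polynomial, and counts how many stored coefficients each needs to reach a target accuracy ε. The test function, the db4 wavelet, the accuracy threshold, and the use of NumPy/PyWavelets are assumptions made purely for illustration.

```python
# Illustrative sketch (assumptions, not the paper's construction):
# count how many coefficients a wavelet expansion vs. a polynomial
# needs to approximate one function of one variable to accuracy eps.
import numpy as np
import pywt

eps = 1e-2                                   # assumed target sup-norm accuracy
x = np.linspace(0.0, 1.0, 1024)
f = np.sin(2 * np.pi * x) + 0.3 * np.sign(x - 0.5)   # smooth part plus a jump

# Wavelet approximation: keep the largest-magnitude coefficients
# until the reconstruction error drops below eps.
coeffs = pywt.wavedec(f, "db4", mode="periodization")
flat, slices = pywt.coeffs_to_array(coeffs)
order = np.argsort(np.abs(flat))[::-1]
for k in range(1, flat.size + 1):
    kept = np.zeros_like(flat)
    kept[order[:k]] = flat[order[:k]]
    rec = pywt.waverec(
        pywt.array_to_coeffs(kept, slices, output_format="wavedec"),
        "db4", mode="periodization")
    if np.max(np.abs(rec - f)) <= eps:
        print(f"wavelet coefficients kept: {k}")
        break

# Polynomial approximation: raise the degree until the error drops
# below eps (the jump slows polynomial convergence considerably).
for deg in range(1, 200):
    p = np.polynomial.chebyshev.Chebyshev.fit(x, f, deg)
    if np.max(np.abs(p(x) - f)) <= eps:
        print(f"polynomial coefficients needed: {deg + 1}")
        break
else:
    print("no polynomial of degree < 200 reaches eps for this function")
```

Counting stored coefficients (each quantized to a fixed number of bits) is a simple stand-in for the bit-count criterion the abstract describes; the paper's asymptotic optimality result concerns networks built from wavelet-like neurons rather than this particular discrete transform.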