1993
DOI: 10.1016/s0893-6080(05)80131-5
Multilayer feedforward networks with a nonpolynomial activation function can approximate any function

Cited by 1,806 publications (1,018 citation statements)
References 13 publications
“…Such a representation corresponds closely to the Kolmogorov function superposition theorem [55]. Based on this relation it was shown [57,58] that the MLP can approximate any continuous function of its inputs, to an accuracy that depends on the number of hidden neurons. In the practical problems we face, however, the desired function is not known and only a limited number of experimental points is available.…”
Section: Multi-layer Perceptron (mentioning)
confidence: 68%
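The width-versus-accuracy relationship described in the excerpt above can be illustrated with a minimal sketch. The NumPy snippet below is an assumed example, not code from the cited papers: it fits only the linear output weights of a single-hidden-layer network with random tanh hidden units to a fixed continuous target, and reports how the worst-case error on a grid shrinks as hidden neurons are added.

```python
# Minimal sketch (illustrative only): approximate a continuous 1-D target
# with a single-hidden-layer network whose accuracy improves with width.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
y = np.sin(3 * x) + 0.5 * np.cos(5 * x)            # assumed example target

def fit_single_hidden_layer(n_hidden):
    """Random tanh hidden layer; output weights fitted by least squares."""
    W = rng.normal(scale=3.0, size=(1, n_hidden))   # hidden weights
    b = rng.uniform(-np.pi, np.pi, size=n_hidden)   # hidden biases
    H = np.tanh(x @ W + b)                          # hidden activations
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)    # linear output layer
    return H @ coef

for n in (5, 20, 80):
    err = np.max(np.abs(fit_single_hidden_layer(n) - y))
    print(f"{n:3d} hidden neurons -> worst-case error {err:.3f}")
```

With the output layer fitted in closed form, the experiment isolates the representational effect of the hidden-layer width that the excerpt refers to.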
“…The feed-forward neural network, by far the most widely used in practice, is essentially a highly non-linear function representation formed by repeatedly combining a fixed non-linear transfer function. It has been shown in [113] that any real-valued continuous function over R^d can be arbitrarily well approximated by a feed-forward neural network with a single hidden layer, as long as the transfer function is not a polynomial. Given the potential power of neural networks in representing arbitrary continuous functions, it would seem that they could easily lead to overfitting and not work effectively in the context of Boosting.…”
Section: On the Choice of Weak Learners for Boosting (mentioning)
confidence: 99%
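The nonpolynomial condition referenced in this excerpt can be sketched numerically. In the hypothetical example below, a single hidden layer with a squared (polynomial) activation only spans polynomials of the same bounded degree, so its error on |x| plateaus no matter how wide the layer is, whereas a tanh (nonpolynomial) hidden layer keeps improving as width grows. All targets and widths are assumptions made for illustration.

```python
# Illustrative sketch: polynomial vs nonpolynomial hidden activations.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 500).reshape(-1, 1)
y = np.abs(x)                                      # target: not a low-degree polynomial

def sup_error(activation, n_hidden):
    """Worst-case grid error after fitting a linear readout on random hidden features."""
    W = rng.normal(size=(1, n_hidden))
    b = rng.normal(size=n_hidden)
    H = activation(x @ W + b)                      # hidden-layer features
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)   # linear readout
    return np.max(np.abs(H @ coef - y))

for n in (10, 100, 1000):
    poly_err = sup_error(np.square, n)             # polynomial activation u -> u^2
    tanh_err = sup_error(np.tanh, n)               # nonpolynomial activation
    print(f"width {n:4d}: square {poly_err:.3f}   tanh {tanh_err:.3f}")
```

The squared activation yields features that all lie in the span of {1, x, x^2}, so adding units cannot reduce the error past the best quadratic fit, which is the failure mode the nonpolynomial condition rules out.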
“…The most common activation functions φ_i(ω; b; x) in the hidden units are sigmoidal for Multi-layer Perceptrons (MLP) and radially symmetric for Radial Basis Function Networks (RBFN), although many other functions may be used [19,24]. Output activation functions φ_0(u) are usually sigmoidal or linear.…”
Section: Feed-forward Neural Network (FNN) (mentioning)
confidence: 99%
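The two families of hidden units named in this excerpt, sigmoidal φ_i for MLPs and radially symmetric units for RBFNs, together with a linear output activation φ_0(u), can be written out as a small sketch. Function names and parameter values below are illustrative assumptions, not the notation of the citing paper.

```python
# Sketch of the two hidden-unit types mentioned above: a sigmoidal MLP unit
# acting on a projection w . x + b, and a radially symmetric RBF unit acting
# on the distance to a centre c, combined by a linear output activation.
import numpy as np

def mlp_hidden_unit(x, w, b):
    """Sigmoidal unit: sigma(w . x + b), as in a Multi-layer Perceptron."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def rbf_hidden_unit(x, c, width):
    """Radially symmetric unit: Gaussian of ||x - c||, as in an RBFN."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * width ** 2))

def linear_output(hidden, weights):
    """Linear output activation phi_0(u) = u; a sigmoid could be used instead."""
    return np.dot(weights, hidden)

x = np.array([0.3, -1.2])
h = np.array([mlp_hidden_unit(x, np.array([1.0, 0.5]), 0.1),
              rbf_hidden_unit(x, np.array([0.0, -1.0]), 1.0)])
print(linear_output(h, np.array([0.7, -0.4])))
```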