1991
DOI: 10.1016/0893-6080(91)90009-t

Approximation capabilities of multilayer feedforward networks

Cited by 5,215 publications (3,000 citation statements)
References 4 publications
“…A MLP with enough units in a single hidden layer can approximate any function, provided the activation function of the neurons satisfies some general constraints [31,32]. From these considerations, we decided to use a MLP with one hidden layer, for which the optimum number of hidden neurons was experimentally determined.…”
Section: Multilayer Perceptron (mentioning)
confidence: 99%
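For readers, the single-hidden-layer architecture this statement relies on can be sketched in a few lines. The layer sizes and the tanh activation below are illustrative assumptions, not details taken from the citing paper or from the cited article.

```python
# Illustrative sketch (assumed sizes/activation): a one-hidden-layer MLP
#   y = W2 @ phi(W1 @ x + b1) + b2,
# the architecture whose universal approximation property is discussed above.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 16, 1          # illustrative layer sizes

W1 = rng.normal(size=(n_hidden, n_in))    # input-to-hidden weights
b1 = np.zeros(n_hidden)                   # hidden biases
W2 = rng.normal(size=(n_out, n_hidden))   # hidden-to-output weights
b2 = np.zeros(n_out)                      # output biases

def mlp(x):
    """Single hidden layer with a sigmoidal (here tanh) activation."""
    h = np.tanh(W1 @ x + b1)              # hidden-layer activations
    return W2 @ h + b2                    # linear output layer

y = mlp(rng.normal(size=n_in))            # one forward pass
```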
“…As neuron activation function in the hidden layer, we chose the hyperbolic tangent sigmoid function (tan-sigmoid), an antisymmetric function in the interval (-1, 1). Tan-sigmoid satisfies the constraints in [31] and [32]. Moreover, it improves the learning speed of MLP [29].…”
Section: Multilayer Perceptron (mentioning)
confidence: 99%
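A small illustrative check (not from the quoted text) of the two properties the statement attributes to the tan-sigmoid: values bounded in (-1, 1) and antisymmetry about the origin.

```python
# Sketch: tan-sigmoid, i.e. the hyperbolic tangent, written out in its
# sigmoidal form 2 / (1 + exp(-2x)) - 1.
import numpy as np

def tan_sigmoid(x):
    # Algebraically identical to np.tanh(x).
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

x = np.linspace(-5.0, 5.0, 11)
assert np.allclose(tan_sigmoid(x), np.tanh(x))
assert np.allclose(tan_sigmoid(-x), -tan_sigmoid(x))   # antisymmetric
assert np.all(np.abs(tan_sigmoid(x)) < 1.0)            # range (-1, 1)
```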
“…In many cases the unknown function to be approximated is not present in H, but a combination of hypotheses drawn from H can expand the space of representable functions, embracing also the true one. Although many learning algorithms present universal approximation properties [55,100], with finite data sets these asymptotic features do not hold: the effective space of hypotheses explored by the learning algorithm is a function of the available data and it can be significantly smaller than the virtual H considered in the asymptotic case. From this standpoint ensembles can enlarge the effective hypotheses coverage, expanding the space of representable functions.…”
Section: Reasons For Combining Multiple Learners (mentioning)
confidence: 99%
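A toy illustration of the point about enlarging the effective space of representable functions; the example and names are ours, not drawn from the quoted text. Averaging two constant "hypotheses" produces a prediction neither base hypothesis can produce alone.

```python
# Toy sketch (assumed example): the base class contains only h(x) = 0 and
# h(x) = 1; their average, 0.5, lies outside that class, illustrating the
# enlarged coverage obtained by combining learners.
import numpy as np

hypotheses = [lambda x, c=c: np.full_like(x, c) for c in (0.0, 1.0)]

def ensemble(x):
    return np.mean([h(x) for h in hypotheses], axis=0)

x = np.linspace(0.0, 1.0, 5)
print(ensemble(x))   # [0.5 0.5 0.5 0.5 0.5]
```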
“…Properties of this neural network model have been studied quite well. By choosing various activation functions, many authors proved that SLFNs with the chosen activation function possess the universal approximation property (see, e.g., [3,4,6,7,8,10,11,14,29]). That is, for any compact set Q ⊂ R d , the class of functions (1.1) is dense in C(Q), the space of continuous functions on Q.…”
Section: Introduction (mentioning)
confidence: 99%
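For context, the class of functions the statement calls (1.1) is the usual single-hidden-layer form; the notation below is a reconstruction under standard conventions, not copied from the citing paper.

```latex
% Assumed standard notation for the SLFN class referred to as (1.1):
f_N(x) \;=\; \sum_{i=1}^{N} c_i \, \sigma\!\bigl(\langle w_i, x\rangle + \theta_i\bigr),
\qquad c_i, \theta_i \in \mathbb{R}, \quad w_i \in \mathbb{R}^{d}.
% The universal approximation property states that, for suitable
% activations \sigma, the set of such f_N is dense in C(Q) for every
% compact Q \subset \mathbb{R}^{d}.
```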