[Proceedings 1992] IJCNN International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.1992.227294
Extrapolation limitations of multilayer feedforward neural networks

Cited by 46 publications (38 citation statements)
References 1 publication
“…A crucial requirement for ANN-based predictions is that the training data form a representative subset of all the cases we wish to predict. Predictions that fall outside the ranges of input and output values defined by the training data require extrapolation and hence are unreliable [Haley and Soloway, 1992]. We used the ANN implementation provided by the free MATLAB toolbox NETLAB [Nabney].…”
Section: Methods
confidence: 99%
“…Developers of new training algorithms and feedforward network architectures frequently test their creations by fitting a polynomial or other simple relationship [see, for example, Almeida (1987), Barton (1991), Haley and Soloway (1992) and Webb and Lowe (1988)]. However, oral tradition has alerted practitioners to training difficulties in such circumstances.…”
Section: Introduction
confidence: 99%
“…The question arises whether such a metamodel is able to predict cases that the network was not trained on. In general, neural networks do not extrapolate well: they are fitted to a function on the provided training data, but outside the subspace populated with training points that function does not necessarily represent the correct relation [13][14][15]. As such, data fed to the network during inference (employing a trained network to make new predictions) must always lie within the training subspace.…”
Section: Introduction
confidence: 99%
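The extrapolation failure these excerpts describe can be sketched with a toy experiment. The following pure-Python example (a hypothetical illustration, not the setup from the 1992 paper) trains a tiny 1-8-1 tanh network to fit y = x² on x ∈ [-1, 1]; inside the training range the fit is good, but at x = 3 the saturated tanh units flatten the output far below the true value 9:

```python
import math
import random

random.seed(0)

# Tiny 1-8-1 MLP with tanh hidden units (hypothetical toy model).
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """Return network output and hidden activations for input x."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

# Training inputs cover only [-1, 1]; target is y = x^2.
xs = [i / 10 - 1 for i in range(21)]
lr = 0.05
for _ in range(5000):
    for x in xs:
        y, h = forward(x)
        err = y - x * x  # gradient of 0.5 * (y - target)^2 w.r.t. y
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

# Interpolation error inside the training range vs. extrapolation
# error at x = 3.0, far outside it (true value 9.0).
in_err = max(abs(forward(x)[0] - x * x) for x in xs)
out_err = abs(forward(3.0)[0] - 9.0)
print(in_err, out_err)  # extrapolation error dwarfs the in-range error
```

Because every tanh unit saturates for inputs far from the training range, the network's output is bounded by the sum of its output weights and cannot track the growing quadratic, matching the unreliability the citing authors describe.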