1997
DOI: 10.1109/72.557662

Capabilities of a four-layered feedforward neural network: four layers versus three

Abstract: Neural-network theorems state that only when there are infinitely many hidden units is a four-layered feedforward neural network equivalent to a three-layered feedforward neural network. In actual applications, however, the use of infinitely many hidden units is impractical. Therefore, studies should focus on the capabilities of a neural network with a finite number of hidden units. In this paper, a proof is given showing that a three-layered feedforward network with N-1 hidden units can give any N input-target…
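As a rough numerical illustration of the finite-capacity claim (an ELM-style random-feature sketch, not the paper's constructive proof; the variable names and the random construction are assumptions), the following shows that with N-1 sigmoid hidden units plus an output bias the output layer has N free parameters, so N distinct input-target pairs can generically be matched exactly by solving a linear system:

```python
import numpy as np

# Illustrative only: fit N arbitrary input-target pairs with one hidden layer
# of N-1 sigmoid units by solving the output layer as a linear system. This is
# a random-feature sketch, not the constructive proof given in the paper.
rng = np.random.default_rng(0)

N = 10                                    # number of input-target pairs
X = rng.normal(size=(N, 3))               # N distinct inputs with 3 features
y = rng.normal(size=N)                    # arbitrary scalar targets

H = N - 1                                 # hidden units, matching the theorem statement
W = rng.normal(size=(3, H))               # random input-to-hidden weights (assumption)
b = rng.normal(size=H)                    # random hidden biases (assumption)

Phi = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # N x (N-1) sigmoid activations
A = np.hstack([Phi, np.ones((N, 1))])     # add an output bias column -> N x N system
beta = np.linalg.solve(A, y)              # output weights and bias

print("max |fit error|:", np.abs(A @ beta - y).max())  # expected near machine precision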

Cited by 377 publications (151 citation statements); references 8 publications.
“…The principle that distinguishes ELM from traditional neural-network methodology is that, in ELM, the feedforward network's input weights and hidden-layer biases are not required to be tuned. The studies of Tamura and Tateishi (1997) and Huang (2003) showed that SLFNs with randomly chosen input weights can efficiently learn distinct training examples with minimum error. Once the input weights and hidden-layer biases are chosen at random, an SLFN can simply be treated as a linear system.…”
Section: Extreme Learning Machine (ELM) (mentioning)
confidence: 99%
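To make the "linear system" remark in the statement above concrete, here is a minimal ELM-style sketch (an illustration under assumed names and toy data, not any cited paper's implementation): the input weights and hidden biases are drawn at random and never tuned, and only the output weights are computed, via the Moore-Penrose pseudoinverse.

```python
import numpy as np

def elm_fit(X, y, n_hidden, rng):
    """Minimal ELM-style fit: random hidden layer, least-squares output layer."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # input weights, left untuned
    b = rng.normal(size=n_hidden)                # hidden biases, left untuned
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                 # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X_train = rng.uniform(-1.0, 1.0, size=(200, 2))
y_train = np.sin(3.0 * X_train[:, 0]) + 0.5 * X_train[:, 1]   # toy regression target

W, b, beta = elm_fit(X_train, y_train, n_hidden=50, rng=rng)
pred = elm_predict(X_train, W, b, beta)
print("training RMSE:", np.sqrt(np.mean((pred - y_train) ** 2)))
```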
“…Although one hidden layer is adequate to enable an NN to approximate any given function, some researchers have argued that an NN with more than one hidden layer may require fewer hidden neurons to approximate the same function. It was theoretically shown in [26] that, for a desired degree of interpolation accuracy, NNs with two hidden layers require considerably fewer hidden neurons than NNs with one hidden layer. From a more practical perspective, it has been shown in [27] through extensive experiments that single-hidden-layer NNs are superior to networks with more than one hidden layer of the same level of complexity, mainly because the latter are more prone to falling into local minima.…”
Section: Introduction (mentioning)
confidence: 99%
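To clarify what "the same level of complexity" means in comparisons like the one quoted above, the sketch below (illustrative only; the layer sizes are assumptions chosen to make the budgets comparable) counts trainable parameters for a one-hidden-layer and a two-hidden-layer fully connected network, so that architectures can be matched by parameter count before comparing accuracy or local-minima behaviour.

```python
def mlp_param_count(layer_sizes):
    """Trainable parameters (weights + biases) of a fully connected network."""
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]))

# Hypothetical sizes chosen only to make the parameter budgets comparable:
one_hidden = [10, 40, 1]       # 10 inputs, one hidden layer of 40 neurons, 1 output
two_hidden = [10, 20, 12, 1]   # same inputs/outputs, two smaller hidden layers

print("one hidden layer :", mlp_param_count(one_hidden), "parameters")   # 481
print("two hidden layers:", mlp_param_count(two_hidden), "parameters")   # 485
```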
“…Class I refers to NNs that have more input than output variables, Class II to NNs with an equal number of inputs and outputs, and Class III to NNs that have more output than input variables. For Class I NNs (more inputs than outputs), one hidden layer is enough in most cases and, according to Tamura and Tateishi (1997), if N-1 neurons are used in the hidden layer (where N is the number of inputs), the NN will give an exact prediction. This recommendation works well when the system has a small number of inputs and the correlation between the data points (inputs and outputs) is not very complex; otherwise, their recommendation will not always work, and we recommend the use of 8 to 20 neurons in the hidden layer for better precision and shorter training time.…”
Section: NN Topology (mentioning)
confidence: 99%
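As a small worked illustration of the sizing heuristic quoted in the statement above (the helper name and the exact fallback values are assumptions of this sketch, not from the cited work): start from N-1 hidden neurons for N inputs, and fall back to the suggested 8-20 neuron range when the input count is small or the input-output mapping is complex.

```python
def suggest_hidden_units(n_inputs, complex_mapping=False):
    """Hidden-layer size per the heuristic quoted above (illustrative only)."""
    if complex_mapping:
        return 20              # upper end of the suggested 8-20 range
    if n_inputs <= 8:
        return 8               # lower end of the range when there are few inputs
    return n_inputs - 1        # the N-1 rule attributed to Tamura and Tateishi (1997)

print(suggest_hidden_units(12))                        # 11
print(suggest_hidden_units(3))                         # 8
print(suggest_hidden_units(12, complex_mapping=True))  # 20
```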