1998
DOI: 10.1109/72.655045

Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions

Abstract: It is well known that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x(i),t(i)) with zero error, and the weights connecting the input neurons and the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained for the case when the activation function for the hidden neurons is the signum function. This paper rigorously proves that standard single-hidden layer feedforward networks (SLFNs) w…
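The zero-error learning result stated in the abstract can be read as an interpolation condition. The restatement below is a sketch, using notation assumed from the citation excerpts quoted further down (β_i for output weights, ω_i and b_i for hidden node parameters, g for the activation function), not the paper's own wording:

\[
\sum_{i=1}^{N} \beta_i \, g(\omega_i \cdot x_j + b_i) = t_j, \qquad j = 1, \dots, N,
\]

or compactly Hβ = T, where H_{ji} = g(ω_i · x_j + b_i) is the N×N hidden layer output matrix. Learning the N distinct samples with zero error amounts to H being invertible, and per the abstract this holds for "almost" arbitrary choices of the input-to-hidden weights.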

Cited by 457 publications (53 citation statements)
References 20 publications

“…In our proposed ensemble, the inputs (to the ensemble) are the output decisions of four known classifiers; for miRNA prediction; and the output is the corresponding ground truth decision. Therefore, our objective is to learn/calculate the best network weights that map the decisions of the adopted classifiers into a single fused output decision [35,36]. …”
Section: Methods (mentioning)
confidence: 99%
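
For illustration only, the fusion step described in this excerpt can be sketched as learning a weight vector that maps the four base classifiers' decisions to the ground-truth decision. The toy data, the least-squares fit, and the 0.5 decision threshold below are assumptions made for the sketch and are not taken from the cited method.

import numpy as np

# Toy decisions of the four base classifiers on a few training samples (rows).
D = np.array([[1, 0, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 1, 0],
              [0, 1, 0, 0],
              [1, 1, 0, 1]], dtype=float)
t = np.array([1, 0, 1, 0, 1], dtype=float)  # ground-truth decisions

# Append a bias column and fit the fusion weights in the least-squares sense.
X = np.hstack([D, np.ones((D.shape[0], 1))])
w, *_ = np.linalg.lstsq(X, t, rcond=None)

# Fuse a new set of four classifier decisions by thresholding at 0.5.
d_new = np.array([1.0, 0.0, 1.0, 1.0])
fused = 1.0 if np.dot(np.append(d_new, 1.0), w) >= 0.5 else 0.0
print(fused)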
“…As introduced in [19], H is called the hidden layer output matrix of the neural network; the i-th column of H is the i-th hidden node output with respect to inputs x_1, x_2, ..., x_N. A gradient-based algorithm can be used to train the values of β, ω_1, ..., ω_N, b_1, ..., b_N [20].…”
Section: Extreme Learning Machine (mentioning)
confidence: 99%
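
A minimal sketch of the hidden layer output matrix H described in this excerpt is given below. The random input weights, the sigmoid activation, and the least-squares solve for β (used here in place of the gradient-based training the excerpt mentions) are all assumptions of the sketch.

import numpy as np

rng = np.random.default_rng(0)

# N training pairs (x_j, t_j); the i-th column of H holds the i-th hidden
# node's output g(ω_i · x_j + b_i) over all N inputs, as in the excerpt above.
N, d = 8, 3
X = rng.normal(size=(N, d))
T = rng.normal(size=(N, 1))

W = rng.normal(size=(d, N))              # input weights ω_1, ..., ω_N (assumed random)
b = rng.normal(size=(N,))                # hidden biases b_1, ..., b_N (assumed random)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid activation (an assumption)

# Output weights β; a least-squares solve stands in for gradient-based training.
beta, *_ = np.linalg.lstsq(H, T, rcond=None)
print(np.allclose(H @ beta, T))          # with N hidden nodes the fit is (near) exact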
“…An extension of the above theorem is proposed by Leshno et al [2] which states that a multilayer feedforward network with additive nodes and locally bounded piecewise continuous activation function can approximate any continuous function if and only if the network's activation function is non-polynomial. In addition, the universal approximation capability is generalized for randomized feedforward networks in some research works [3][4][5] and is investigated for a network trained with N distinct samples [6].…”
Section: Introduction (mentioning)
confidence: 99%