2007
DOI: 10.1109/ijcnn.2007.4371216

Upper Bound on Pattern Storage in Feedforward Networks

Abstract: Starting from the strict interpolation equations for multivariate polynomials, an upper bound is developed for the number of patterns that can be memorized by a nonlinear feedforward network. A straightforward proof by contradiction is presented for the upper bound. It is shown that the hidden activations do not have to be analytic. Networks, trained by conjugate gradient, are used to demonstrate the tightness of the bound for random patterns. Based upon the upper bound, small multilayer perceptron mo…
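The abstract's experiment invites a quick reproduction. The sketch below is a hedged illustration, not the paper's own code: it trains a small one-hidden-layer network by conjugate gradient on random patterns and watches the fit degrade once the pattern count passes a parameter-counting bound. The bound N_v <= N_w / M (total weights over number of outputs) is an assumption in the spirit of the abstract's interpolation argument, not a formula quoted from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 1                     # small MLP, single output
n_w = n_hid * (n_in + 1) + n_out * (n_hid + 1)   # total free parameters N_w
bound = n_w // n_out                             # assumed storage bound N_v <= N_w / M

def mse(w, X, Y):
    # Unpack the flat weight vector, run the one-hidden-layer network,
    # and return the training mean-squared error.
    i = n_hid * (n_in + 1)
    W1 = w[:i].reshape(n_hid, n_in + 1)
    W2 = w[i:].reshape(n_out, n_hid + 1)
    H = np.tanh(X @ W1[:, :-1].T + W1[:, -1])    # hidden activations
    P = H @ W2[:, :-1].T + W2[:, -1]             # network outputs
    return np.mean((P - Y) ** 2)

for n_v in (bound - 8, bound, bound + 8):        # below, at, and above the bound
    X = rng.standard_normal((n_v, n_in))
    Y = rng.standard_normal((n_v, n_out))        # random target patterns
    res = minimize(mse, 0.5 * rng.standard_normal(n_w), args=(X, Y),
                   method="CG", options={"maxiter": 5000})
    print(f"patterns={n_v:3d}  final training MSE={res.fun:.2e}")
```

Below the bound the optimizer can typically drive the error close to zero; above it a nonzero error floor tends to remain, which is the tightness behavior the abstract reports for random patterns.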

Cited by 7 publications (2 citation statements)
References 29 publications

“…Since ELMs work for SLFNs, the number of hidden neurons is the only parameter that determines the network's architecture. According to [37], [63]-[66], the number of hidden neurons in the network is strongly related to its learning capacity. Various criteria have been proposed to define upper and lower bounds on the number of hidden neurons.…”
Section: Building the Prediction Model
confidence: 99%
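The quoted point, that hidden-layer size is an ELM's single architectural knob, is easy to see in code. Below is a minimal sketch assuming the standard ELM recipe (random, fixed input weights; output weights from least squares); the function names and problem sizes are illustrative, not taken from the cited papers.

```python
import numpy as np

def elm_fit(X, Y, n_hidden, rng):
    # n_hidden is the only architectural choice: input weights and biases
    # are drawn at random and never trained.
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                            # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)      # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))              # toy regression target
W, b, beta = elm_fit(X, Y, n_hidden=40, rng=rng)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - Y) ** 2))
```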
“…In general, in a single-layer fully connected feed-forward neural network trained with BP, the learning capacity is governed by the number of hidden units (Huang and Babri, 1998; Narasimha et al., 2008). This rule of thumb simply states that, for a given number of training samples, the number of hidden units should be more than a lower bound and still less than an upper bound.…”
Section: Accepted Manuscript
confidence: 99%
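The exact bound formulas in this excerpt were lost in extraction, so the sketch below only illustrates the shape of the rule of thumb. The upper bound N_h <= N_v follows Huang and Babri (1998), who show that N_v distinct samples are learnable with at most N_v hidden units; the lower bound here is a hypothetical parameter-counting condition (enough weights to interpolate all N_v targets), not the excerpt's lost formula.

```python
import math

def hidden_unit_bounds(n_v, n_in, n_out):
    # Upper bound: at most n_v hidden units for n_v distinct samples
    # (Huang & Babri, 1998). Lower bound: require at least as many weights
    # as interpolation equations, n_h*(n_in+1) + n_out*(n_h+1) >= n_out*n_v.
    # The lower bound is an illustrative assumption, not the cited formula.
    upper = n_v
    lower = math.ceil(n_out * (n_v - 1) / (n_in + 1 + n_out))
    return lower, upper

print(hidden_unit_bounds(n_v=200, n_in=10, n_out=1))  # -> (17, 200)
```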