2011
DOI: 10.1016/j.neunet.2010.10.002
Some comparisons of complexity in dictionary-based and linear computational models

Cited by 22 publications (14 citation statements)
References 25 publications
“…the error is proportional to n^(−b/d), where n is the number of neurons, d is the dimensionality of the data and 1 ≤ b is a constant depending on the class of the activation functions [8,16]. We note that in particular cases the approximation error bounds are smaller and do not grow exponentially with the dimensionality of the data [17]; however, these special cases do not apply very often.…”
Section: Related Work (mentioning)
confidence: 94%
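A minimal numerical sketch of the bound quoted above (assuming the worst-case rate error ≈ n^(−b/d) with the constant absorbed): solving the bound for n shows the required number of neurons scales like ε^(−d/b), which for any fixed target error ε < 1 grows exponentially with the dimension d.

```python
# Sketch only: invert the assumed worst-case bound error ~ n**(-b/d).
# For a fixed target error eps < 1 this gives n >= eps**(-d/b), i.e. the
# neuron count grows exponentially with the input dimension d.

def neurons_needed(eps: float, d: int, b: float = 1.0) -> float:
    """Smallest n (up to the hidden constant) with n**(-b/d) <= eps."""
    return eps ** (-d / b)

for d in (2, 5, 10, 20):
    print(f"d = {d:2d}: n >= {neurons_needed(0.1, d):.2e}")
```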
“…The approximation theory results typically imply a relatively high minimal number of neurons for worst-case approximation [31–35]. Some of these results show that the number of required neurons in many cases grows exponentially or polynomially with the dimensionality of the data [18,32,35–37]. However, if the parameters of the basis functions represented by the neurons are allowed to vary, then for certain classes of approximated functions and appropriate basis functions the number of neurons may not need to grow so quickly, or at all, with the dimensionality of the data [36–38].…”
Section: B. Supervised Learning of Function Approximation with Single… (mentioning)
confidence: 99%
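The dimension-independent alternative alluded to in the last sentence of that statement is usually expressed through the Maurey–Jones–Barron bound, given here as standard background rather than as a quotation from the citing paper: for a function in the closed convex hull of a bounded dictionary in a Hilbert space, n-term combinations achieve a rate of n^(−1/2) regardless of d.

```latex
% Maurey–Jones–Barron bound (standard background): for f in the closed
% convex hull of a dictionary G in a Hilbert space, with
% s_G = \sup_{g \in G} \|g\|, there exist g_1,\dots,g_n \in G and convex
% coefficients a_1,\dots,a_n such that
\[
  \Bigl\| \, f - \sum_{i=1}^{n} a_i g_i \, \Bigr\| \;\le\; \frac{s_G}{\sqrt{n}} ,
\]
% so accuracy \varepsilon needs only n = O(\varepsilon^{-2}) dictionary terms,
% independently of the input dimension d, in contrast to the fixed-basis
% worst-case count of order \varepsilon^{-d/b}.
```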
“…In general, key problems of nonlinear function approximation with neural networks remain: the reliable approximation of functions defined on high-dimensional inputs often requires very large data volumes that grow exponentially with the dimensionality of the data [18,36,37,43], and finding simple models of the data with good generalization ability, in the form of neural networks, is very difficult. Often the volume of the available data is very small once the dimensionality of the data is taken into account, i.e.…”
Section: B. Supervised Learning of Function Approximation with Single… (mentioning)
confidence: 99%
“…Thus they are sometimes called variable-basis approximation schemes, in contrast to traditional linear approximators, which are called fixed-basis approximation schemes. In some cases, especially for functions of large numbers of variables, neural networks achieve better approximation rates than linear models while having much smaller model complexity [9,10]. Motivated by the experimental exploration [7,8] of surrogate models of solutions of Fredholm equations by neural networks, Gnecco et al. [11] initiated a theoretical analysis of this modelling.…”
Section: Introduction (mentioning)
confidence: 99%
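A toy numpy sketch of the fixed-basis versus variable-basis distinction (not from the cited works; a one-dimensional Gaussian dictionary is assumed purely for illustration): both models combine the same number of terms, but the variable-basis model is also allowed to place and scale its basis functions, which is what the quoted passage means by a variable-basis approximation scheme.

```python
import numpy as np

# Both models use n Gaussian terms; only the variable-basis one may also
# adjust the inner parameters (centers and width) of its terms.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)
f = np.sin(6 * x) * np.exp(-x ** 2)          # target function
n = 6                                        # number of basis terms

def design(centers, width):
    """Matrix of Gaussian basis functions evaluated on the grid x."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Fixed-basis scheme: equispaced centers and a preset width; only the outer
# linear coefficients are fitted, by least squares.
G_fixed = design(np.linspace(-1.0, 1.0, n), width=0.5)
c_fixed, *_ = np.linalg.lstsq(G_fixed, f, rcond=None)
err_fixed = np.max(np.abs(G_fixed @ c_fixed - f))

# Variable-basis (dictionary) scheme: the inner parameters are also free;
# a crude random search stands in for proper nonlinear optimization.
err_var = np.inf
for _ in range(2000):
    G = design(rng.uniform(-1.0, 1.0, n), width=rng.uniform(0.05, 0.8))
    c, *_ = np.linalg.lstsq(G, f, rcond=None)
    err_var = min(err_var, np.max(np.abs(G @ c - f)))

print(f"fixed-basis    sup error: {err_fixed:.3f}")
print(f"variable-basis sup error: {err_var:.3f}")
```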