2017
DOI: 10.1016/j.neunet.2017.06.016
Limitations of shallow nets approximation

Cited by 31 publications (24 citation statements)
References 26 publications
“…Limitations of the approximation capabilities of shallow nets were first established in [4], in terms of their incapability of localized approximation. Five years later, [8] described their limitations by providing lower bounds on the approximation of smooth functions in the minimax sense, a result recently highlighted in [25], which showed that there exists a probability measure under which, with high confidence, smooth functions cannot be approximated well by shallow nets. In [1], Bengio et al. also pointed out the limitations of some shallow nets in terms of the so-called "curse of dimensionality".…”
Section: B. Learning Rate Analysis (mentioning)
confidence: 99%
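For orientation, the minimax lower bounds mentioned above are typically of the following schematic form (the exact norm, smoothness class, and exponent used in [8] are not reproduced here; the rate shown is the commonly cited one for single-hidden-layer networks and is given only as an illustrative assumption):

\[
\sup_{f \in W^{r}_{2}(B^{d})} \; \inf_{g \in \mathcal{S}_{n}} \; \| f - g \|_{L^{2}(B^{d})} \;\geq\; C\, n^{-r/(d-1)},
\]

where $\mathcal{S}_{n}$ denotes the class of shallow (single-hidden-layer) nets with $n$ hidden units, $W^{r}_{2}(B^{d})$ is a Sobolev ball of smoothness $r$ on the $d$-dimensional unit ball, and $C > 0$ is a constant independent of $n$. Bounds of this form say that no shallow net of bounded size can approximate all smooth functions faster than an algebraic rate that degrades with the input dimension $d$.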
“…This theorem shows a particular limitation of neural networks with one hidden layer. We refer the reader to [5,25] for interesting results and discussions around other limitations of such networks.…”
Section: Analysis of the Multivariate Case (mentioning)
confidence: 99%