Nonlinear Approximation and (Deep) $$\mathrm{ReLU}$$ Networks (2021)
DOI: 10.1007/s00365-021-09548-z
Cited by 127 publications (69 citation statements); references 22 publications.
“…In the second step, we construct a ReLU neural network with the desired size N that realizes φ exactly. This construction is based on results from [12] for representing free-knot linear splines and sums of functions using neural networks. The improvement over [11] results from this second step.…”
Section: The Achievability Part: Proof of Theorem
Mentioning confidence: 99%
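The free-knot spline representation referred to in this excerpt can be made concrete with a standard fact: a continuous piecewise linear function with k interior breakpoints is realized exactly by a one-hidden-layer ReLU network with k hidden units plus an affine term, whose output weights are the slope changes at the knots. The sketch below is illustrative only; the knot locations, slopes, and helper names are made-up example values, not the construction of [12].

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Example free-knot linear spline on [0, 1]: knots (breakpoints) and the slope
# on each of the four resulting pieces. These values are arbitrary.
knots  = np.array([0.2, 0.5, 0.8])        # t_1 < t_2 < t_3
slopes = np.array([1.0, -2.0, 0.5, 3.0])  # slope on each piece
c0 = 0.0                                  # value of the spline at x = 0

def spline_as_relu_net(x):
    # One hidden ReLU unit per knot; output weights are the slope changes,
    # so f(x) = c0 + s_0*x + sum_i (s_i - s_{i-1}) * relu(x - t_i).
    hidden = relu(x[:, None] - knots[None, :])   # shape (n, k)
    weights = np.diff(slopes)                     # slope change at each knot
    return c0 + slopes[0] * x + hidden @ weights

# Sanity check against direct piecewise-linear interpolation of the same spline.
xs = np.linspace(0.0, 1.0, 1001)
breaks = np.concatenate(([0.0], knots, [1.0]))
vals = [c0]
for s, (a, b) in zip(slopes, zip(breaks[:-1], breaks[1:])):
    vals.append(vals[-1] + s * (b - a))
reference = np.interp(xs, breaks, vals)
assert np.allclose(spline_as_relu_net(xs), reference, atol=1e-9)
```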
“…In other regimes, our construction uses at most the same number of neurons as that of [11]. The key ingredient that we used to reduce the number of neurons is an efficient construction of the neural networks that compute piecewise affine functions studied in [12]. Instead of representing a piecewise affine function by a straightforward neural net of depth 2 and size linear in the number of affine pieces, [12] constructs a deeper neural network computing the same function.…”
Section: Introduction
Mentioning confidence: 99%
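The depth-versus-size trade-off mentioned in this excerpt can be illustrated with the classical sawtooth (tent-map composition) example; this is a standard textbook construction, not necessarily the one used in [12]. Composing a two-unit ReLU "hat" with itself d times yields a piecewise affine function with 2^d linear pieces using only 2d hidden units, whereas a depth-2 network needs roughly one unit per piece.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def hat(x):
    # One ReLU layer with 2 units computing the tent map on [0, 1]:
    # hat(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def sawtooth(x, depth):
    # Composing the tent map `depth` times gives a piecewise affine function
    # with 2**depth linear pieces on [0, 1], using only 2*depth ReLU units,
    # far fewer than the roughly 2**depth units a depth-2 network would need.
    y = x
    for _ in range(depth):
        y = hat(y)
    return y

xs = np.linspace(0.0, 1.0, 2**12 + 1)
ys = sawtooth(xs, depth=5)          # 32 linear pieces from 10 ReLU units
# Count the linear pieces by counting nonzero second differences (kinks).
pieces = 1 + np.count_nonzero(np.abs(np.diff(ys, 2)) > 1e-9)
print(pieces)                        # -> 32
```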
“…A classical result along those lines is the universal approximation theorem [7,13], which states that single-hidden-layer neural networks with sigmoidal activation function can approximate continuous functions on compact subsets of R^n arbitrarily well. More recent developments in this area are concerned with the influence of network depth on attainable approximation quality [8,10,23]. A theory establishing the fundamental limits of deep neural network expressivity is provided in [6,9].…”
Section: Introduction
Mentioning confidence: 99%
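The universal approximation statement quoted here can be illustrated by a hand-built (untrained) single-hidden-layer sigmoid network: steep sigmoids act as smoothed steps, and their sum forms a staircase that tracks a continuous target on a compact interval. The target function, width, and steepness below are arbitrary example choices, not taken from [7] or [13].

```python
import numpy as np

def sigmoid(z):
    # Logistic function written via tanh to stay numerically stable for large |z|.
    return 0.5 * (1.0 + np.tanh(0.5 * z))

def target(x):
    return np.sin(2.0 * np.pi * x)   # continuous target on [0, 1]

width = 200          # number of hidden sigmoid units
steepness = 2000.0   # larger -> sharper steps -> staircase closer to a true step function
grid = np.linspace(0.0, 1.0, width + 1)
centers = 0.5 * (grid[:-1] + grid[1:])   # step locations (interval midpoints)
jumps = np.diff(target(grid))            # step heights

def shallow_sigmoid_net(x):
    # Single hidden layer of sigmoids; output weights are the staircase jumps.
    hidden = sigmoid(steepness * (x[:, None] - centers[None, :]))
    return target(grid[0]) + hidden @ jumps

xs = np.linspace(0.0, 1.0, 2001)
err = np.max(np.abs(shallow_sigmoid_net(xs) - target(xs)))
print(f"sup-norm error on [0, 1]: {err:.3f}")   # roughly 2e-2 here; shrinks as width grows
```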