2019
DOI: 10.48550/arxiv.1910.01545
Preprint

On Universal Approximation by Neural Networks with Uniform Guarantees on Approximation of Infinite Dimensional Maps

Abstract: The study of universal approximation of arbitrary functions f : X → Y by neural networks has a rich and thorough history dating back to Kolmogorov (1957). In the case of learning finite dimensional maps, many authors have shown various forms of the universality of both fixed depth and fixed width neural networks. However, in many cases, these classical results fail to extend to the recent use of approximations of neural networks with infinitely many units for functional data analysis, dynamical systems identification…
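For reference, the classical finite-dimensional statement the abstract alludes to (Cybenko, 1989; Hornik et al., 1989) says that for any continuous f : K → R on a compact K ⊂ R^d and any ε > 0, there exist N, a_i ∈ R, w_i ∈ R^d, b_i ∈ R such that

    sup_{x ∈ K} | f(x) − Σ_{i=1}^{N} a_i σ(w_i · x + b_i) | < ε

for a suitable (e.g., sigmoidal) activation σ. The preprint and the citing works below concern extending guarantees of this kind to maps between infinite-dimensional spaces.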

Cited by 2 publications (4 citation statements); references 8 publications. Citing publications appeared in 2020 and 2023.

Citation statements:
“We provide a proof of Theorem 13 in the appendix. We note that S ⊂ R^d and T ⊂ R^{d'} in Theorem 13 are allowed to be non-compact, unlike the result in Guss and Salakhutdinov (2019). Combining Theorem 9 with Theorem 13, we obtain the following theorem.”
Section: Universal Approximation Theorem in Infinite Dimension
confidence: 90%
“After his work, several researchers showed similar results generalizing the sigmoidal function to a larger class of activation functions, e.g., Barron (1994), Hornik et al. (1989), Funahashi (1989), Kůrková (1992), and Sonoda and Murata (2017). These results concerned approximation of maps between finite-dimensional vector spaces, but Guss and Salakhutdinov (2019) recently generalized them to continuous maps between infinite-dimensional function spaces.”
Section: Related Work
confidence: 92%
“The neural operator is an emerging deep learning technique that differs from a typical neural network in several ways. While neural networks can approximate any function, i.e., any map between finite-dimensional spaces (Cybenko, 1989), approximating an operator, a map between infinite-dimensional spaces, requires a network of infinite length (Guss & Salakhutdinov, 2019). Thus, standard neural networks fail to accurately approximate solution operators of PDEs, which are mappings between infinite-dimensional spaces.”
Section: Introduction
confidence: 99%
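To make the finite- vs. infinite-dimensional distinction above concrete, here is a minimal, hypothetical sketch (not the construction from Guss & Salakhutdinov, 2019): an operator between function spaces can only be fit by an ordinary finite model after discretizing its inputs and outputs on a fixed grid. The grid size n, the antiderivative operator, and the linear least-squares fit standing in for a trained network are all illustrative assumptions.

# Minimal sketch (hypothetical, not the paper's construction): approximating
# an operator between function spaces by first discretizing functions on a
# fixed grid. The target operator is the antiderivative u -> F with
# F(x) = integral of u from 0 to x.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                  # grid resolution: functions become R^n vectors
x = np.linspace(0.0, 1.0, n)

def sample_u(k=4):
    """Draw a random smooth input function as a low-order Fourier series."""
    a = rng.normal(size=k)
    return sum(a[j] * np.sin((j + 1) * np.pi * x) for j in range(k))

def antiderivative(u):
    """Ground-truth operator, discretized with a cumulative trapezoid rule."""
    return np.concatenate([[0.0], np.cumsum((u[1:] + u[:-1]) / 2) * (x[1] - x[0])])

# Fit a linear map W : R^n -> R^n by least squares on sampled (u, F) pairs,
# standing in for a trained finite network on this particular grid.
U = np.stack([sample_u() for _ in range(500)])
F = np.stack([antiderivative(u) for u in U])
W, *_ = np.linalg.lstsq(U, F, rcond=None)

u_test = sample_u()
err = np.max(np.abs(u_test @ W - antiderivative(u_test)))
print(f"sup-norm error on a held-out input function: {err:.2e}")

The fitted map is tied to the chosen grid; roughly speaking, uniform guarantees in infinite dimension, as in the preprint's title, concern controlling approximation error for the operator itself rather than for any one discretization of it.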