2021
DOI: 10.48550/arxiv.2109.11354
Preprint

Arbitrary-Depth Universal Approximation Theorems for Operator Neural Networks

Abstract: The standard Universal Approximation Theorem for operator neural networks (NNs) holds for arbitrary width and bounded depth. Here, we prove that operator NNs of bounded width and arbitrary depth are universal approximators for continuous nonlinear operators. In our main result, we prove that for non-polynomial activation functions that are continuously differentiable at a point with a nonzero derivative, one can construct an operator NN of width five, whose inputs are real numbers with finite decimal representations, that is arbitrarily close to any given continuous nonlinear operator.
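
To make the bounded-width statement concrete, the following Python sketch stacks many width-five layers with a smooth non-polynomial activation (tanh); it only illustrates the shape of such a network, with random placeholder weights rather than the construction proved to exist in the paper.

# Sketch of a fixed-width (width-5), arbitrary-depth network.
# Weights are random placeholders, not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)
WIDTH = 5    # width stays fixed at five, as in the main result
DEPTH = 40   # depth is the resource that grows with the target accuracy

def activation(x):
    # tanh is non-polynomial and continuously differentiable with a
    # nonzero derivative at 0, matching the hypotheses in the abstract
    return np.tanh(x)

weights = [rng.standard_normal((WIDTH, WIDTH)) / np.sqrt(WIDTH) for _ in range(DEPTH)]
biases = [0.1 * rng.standard_normal(WIDTH) for _ in range(DEPTH)]
w_in = rng.standard_normal((WIDTH, 1))
w_out = rng.standard_normal((1, WIDTH))

def narrow_deep_nn(x):
    """Scalar-in, scalar-out network of constant width 5 and large depth."""
    h = activation(w_in @ np.atleast_1d(x))
    for W, b in zip(weights, biases):
        h = activation(W @ h + b)
    return float(w_out @ h)

print(narrow_deep_nn(0.3))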

Cited by 3 publications (3 citation statements)
References 13 publications

Citation statements:

“…We also introduce various enhancements to DeepONet to accelerate its training and increase its accuracy, introducing for example the POD modes in the trunk net, obtained readily from the available training datasets. In addition to computational tests, we also perform a theoretical comparison of DeepONet versus FNO, following the published work on the theory of DeepONet in [2,24,25,26], and on the more recent theory of FNO in [4,27]. On this point, it is worth noting that DeepONet was based from the outset on the theorem of Chen & Chen [2], whereas the formulation of FNO was not theoretically justified originally, and the recent theoretical work covers only invariant kernels.…”
Section: Introduction (mentioning)
confidence: 99%
“…They have also proven that the DeepONet architecture has size O(|log(ε)|^κ) for any κ > 0 depending on the physical space dimension. In paper [46], the authors have shown that for non-polynomial activation functions, an operator NN of width five can be made arbitrarily close to any given continuous nonlinear operator. They have also shown the theoretical advantages of depth by constructing operator ReLU neural networks of depth 2k^3 + 8 with constant width, which they compare with other operator ReLU neural networks of depth k.…”
Section: Neural Operator Theory (mentioning)
confidence: 99%
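
As a loose illustration of the depth scaling quoted above, the snippet below builds a constant-width ReLU stack whose number of layers follows the 2k^3 + 8 formula; the width of 5 and the random weights are placeholders, not the cited construction.

# Constant-width ReLU stack with depth 2*k**3 + 8 (placeholder weights).
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def build_relu_stack(k, width=5, seed=0):
    """Return (W, b) pairs defining a ReLU network of depth 2*k**3 + 8."""
    depth = 2 * k**3 + 8
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((width, width)) / np.sqrt(width), np.zeros(width))
            for _ in range(depth)]

def apply_stack(layers, h):
    for W, b in layers:
        h = relu(W @ h + b)
    return h

layers = build_relu_stack(k=3)   # depth 2*27 + 8 = 62 layers
print(len(layers), apply_stack(layers, np.ones(5)))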
“…Motivation: Operator learning techniques have recently attracted significant attention thanks to their effectiveness and favorable complexity in approximating maps between infinite-dimensional Banach spaces [1,2,3]. Techniques such as deep operator networks (DeepONets) [4], the family of neural operator methods [5], and operator-valued kernel methods [6,7] have demonstrated promise in building fast surrogate models for emulating complex physical processes, opening new avenues for sampling, inference and uncertainty quantification in very high-dimensional parameter spaces.…”
Section: Introduction (mentioning)
confidence: 99%
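
For context on the DeepONet architecture referenced in these statements, a minimal branch-trunk sketch is shown below; it approximates G(u)(y) as an inner product of branch features (of the sampled input function) and trunk features (of the query point). The layer sizes and weights are illustrative placeholders, not the configurations used in the cited works.

# Minimal DeepONet-style sketch: G(u)(y) ≈ <branch(u(x_1..x_m)), trunk(y)>.
# Sizes and weights are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(1)
m, p = 32, 16   # number of sensors for the input function, latent dimension

def mlp(sizes):
    # Random (W, b) pairs for a small fully connected network.
    return [(rng.standard_normal((o, i)) / np.sqrt(i), np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for W, b in layers[:-1]:
        x = np.tanh(W @ x + b)
    W, b = layers[-1]
    return W @ x + b

branch = mlp([m, 64, p])   # encodes the sampled input function u(x_1), ..., u(x_m)
trunk = mlp([1, 64, p])    # encodes the query location y

def deeponet(u_samples, y):
    """Approximate G(u)(y) as a branch-trunk inner product."""
    return float(forward(branch, u_samples) @ forward(trunk, np.atleast_1d(y)))

u = np.sin(np.linspace(0.0, np.pi, m))   # example input function at m sensors
print(deeponet(u, 0.5))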