2023
DOI: 10.1016/j.jco.2023.101746
Lower bounds for artificial neural network approximations: A proof that shallow neural networks fail to overcome the curse of dimensionality

Cited by 8 publications (6 citation statements) · References 42 publications
“…In particular, in [75] it is proven that NNs have the expressiveness to overcome the CoD for semilinear heat PDEs. On the other hand, there are also some articles [51,54,136,165] that derive lower bounds on the complexity of NNs with the rectified linear unit (ReLU) activation function needed to achieve a given accuracy, showing that there are natural classes of functions for which deep NNs with ReLU activation cannot escape the CoD.…”
Section: Approximation Results For Solution Learning In Pdesmentioning
confidence: 99%
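The curse of dimensionality referenced in the excerpt above can be illustrated with a minimal sketch (not taken from the cited papers): a classical grid-based approximation on [0, 1]^d with m nodes per axis needs m^d nodes in total, so the cost grows exponentially in the dimension d. The lower bounds in [51,54,136,165] show that, for certain function classes, ReLU networks face an analogous exponential blow-up.

```python
# Minimal illustration of the curse of dimensionality for tensor grids:
# resolving a function on [0, 1]^d with mesh width 1/m takes m**d nodes,
# i.e. exponentially many in the dimension d.

def grid_points(m: int, d: int) -> int:
    """Number of nodes in a uniform tensor grid with m points per axis."""
    return m ** d

for d in (1, 2, 10, 100):
    print(f"d = {d:3d}: {grid_points(10, d)} grid points")
```

Already at d = 100 with only 10 points per axis, the grid has 10^100 nodes, far beyond any feasible computation; this is the blow-up that "overcoming the CoD" results rule out for the methods they cover.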
“…(2020); Grohs et al. (2022) have shown that feedforward neural networks can efficiently approximate the PDEs associated with most SDEs. Likewise, regular path-functionals of a jump-diffusion process with Lipschitz coefficients can be efficiently approximated by neural SDEs (Gonon and Schwab, 2021).…”
Section: Dynamic Case – Universal Approximation Of Causal Mapsmentioning
confidence: 99%
“…PDE from computational finance, the Hamilton-Jacobi-Bellman (HJB) PDEs arising from stochastic optimal control problems, or the many-electron Schrödinger equation from computational chemistry. For BS PDEs and certain nonlinear HJB PDEs we were recently able to prove that neural networks are capable of representing their solutions without incurring the curse of dimensionality [9,10], and that such solutions can be found numerically by solving an empirical risk minimization (ERM) problem whose size scales only polynomially in the problem dimension [3]. While the analysis of the computational complexity of the ERM problem remains wide open, there are some empirical results suggesting that its scaling does not suffer from the curse of dimensionality either; see [2] and Figure 3.…”
Section: Scientific Machine Learningmentioning
confidence: 99%
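The ERM problem mentioned in the excerpt above can be sketched in a minimal, hedged form (this is an illustration, not the method of [3]): choose network parameters that minimize the average squared loss over n training samples. Here the hidden ReLU layer is frozen at random weights, so the ERM over the output-layer coefficients reduces to an ordinary least-squares problem; all dimensions and the target function below are arbitrary choices for the demonstration.

```python
# Sketch of empirical risk minimization (ERM) for a shallow ReLU network.
# The hidden layer is frozen at random weights, so minimizing the empirical
# risk over the output weights c is a linear least-squares problem:
#   argmin_c (1/n) * sum_i (relu(x_i @ W + b) @ c - y_i)**2
import numpy as np

rng = np.random.default_rng(0)

d, width, n = 5, 64, 400                  # input dim, hidden width, sample count
X = rng.uniform(-1.0, 1.0, size=(n, d))   # training inputs on [-1, 1]^d
y = np.sin(X.sum(axis=1))                 # illustrative target function

W = rng.normal(size=(d, width))           # frozen random hidden weights
b = rng.normal(size=width)
H = np.maximum(X @ W + b, 0.0)            # ReLU feature matrix, shape (n, width)

# Solve the ERM problem for the output weights in closed form.
c, *_ = np.linalg.lstsq(H, y, rcond=None)
risk = float(np.mean((H @ c - y) ** 2))
print(f"empirical risk: {risk:.4f}")
```

The point of the polynomial-scaling results quoted above is that the number of samples and parameters needed for such an ERM problem can grow only polynomially in d, in contrast to the exponential growth of grid-based methods.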