2020
DOI: 10.1098/rspa.2019.0630
Overcoming the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations

Abstract: It has long been well known that high-dimensional linear parabolic partial differential equations (PDEs) can be approximated by Monte Carlo methods with a computational effort that grows polynomially both in the dimension and in the reciprocal of the prescribed accuracy. In other words, linear PDEs do not suffer from the curse of dimensionality. For general semilinear PDEs with Lipschitz coefficients, however, it remained an open question whether these suffer from the curse of dimensionality. In th…
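As a concrete illustration of the claim about linear PDEs, the sketch below approximates the solution of the d-dimensional heat equation ∂u/∂t + (1/2)Δu = 0, u(T,·) = g, via its Feynman-Kac representation u(t,x) = E[g(x + W_{T-t})]. The function names and the terminal condition g are illustrative assumptions, not taken from the paper; the point is only that the cost to reach root-mean-square accuracy ε scales like O(d·ε⁻²), i.e. polynomially in both the dimension and the reciprocal accuracy.

import numpy as np

def mc_linear_heat(t, x, T, g, num_samples, rng):
    # Feynman-Kac Monte Carlo for du/dt + (1/2)*Laplacian(u) = 0, u(T, .) = g:
    # u(t, x) = E[g(x + W_{T-t})], with W a d-dimensional Brownian motion.
    # Cost is O(num_samples * d); the statistical error decays like
    # num_samples**(-1/2), so accuracy eps costs O(d * eps**(-2)).
    d = x.shape[0]
    W = rng.standard_normal((num_samples, d)) * np.sqrt(T - t)
    return float(np.mean(g(x + W)))

rng = np.random.default_rng(0)
d = 100                                            # high dimension, no blow-up in cost
g = lambda z: np.exp(-np.sum(z**2, axis=-1) / d)   # hypothetical terminal condition
print(mc_linear_heat(0.0, np.zeros(d), 1.0, g, 10_000, rng))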

Cited by 68 publications (79 citation statements) · References 100 publications (150 reference statements)
“…Beginning around 2016, the spectacular successes of machine learning systems in computer vision, natural language processing and other areas prompted renewed efforts to establish a mathematically rigorous foundation for, in particular, deep feedforward neural networks [14,22,49,61,62,75,99,118,123,128,134,142-144,180,181]. We draw particular attention to a number of publications that rigorously establish that certain neural network architectures are theoretically able to overcome the curse of dimensionality for various linear and nonlinear PDEs, cf. [8,18,65,81,82,84,85,92].…”
Section: Extensions and Related Work
confidence: 99%
“…Branching diffusion approximation methods are, for certain nonlinear PDEs, as efficient as plain-vanilla Monte Carlo approximations are for linear PDEs, but the error analysis applies only when the time horizon T ∈ (0, ∞) and the initial condition, respectively, are sufficiently small, and branching diffusion approximation methods fail to converge when the time horizon T ∈ (0, ∞) exceeds a certain threshold (cf., e.g., [41, Theorem 3.12]). For MLP approximation methods it has recently been shown in [4,45,46] that such algorithms do indeed overcome the curse of dimensionality for certain classes of gradient-independent PDEs. Numerical simulations for deep-learning-based approximation methods for nonlinear PDEs in high dimensions are very encouraging (see, e.g., the above-named references [1-3,6-8,13,14,16,17,21,24,25,31,34-36,40,43,48,50,52,54-58,60,62,63]), but so far only partial error analysis is available for such algorithms (which, in turn, is strongly based on the above-mentioned error analysis for the MLP approximation method; cf.…”
Section: Introduction
confidence: 99%
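To fix ideas, a simplified, gradient-independent model case of the semilinear PDEs discussed above (an illustrative special case, not the full generality of the cited works) is

\[
\frac{\partial u}{\partial t}(t,x) + \frac{1}{2}(\Delta_x u)(t,x) + f\big(u(t,x)\big) = 0,
\qquad u(T,x) = g(x),
\]

whose solution satisfies the Feynman-Kac-type fixed-point equation

\[
u(t,x) = \mathbb{E}\big[g(x + W_{T-t})\big] + \int_t^T \mathbb{E}\big[f\big(u(s,\,x + W_{s-t})\big)\big]\,\mathrm{d}s .
\]

Branching diffusion methods discretize this representation by expanding the nonlinearity f into a series of branching scenarios, which is where the smallness restriction on the time horizon enters; MLP methods instead apply Picard iteration to the fixed-point equation and spread Monte Carlo samples across the Picard levels in a multilevel fashion.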
“…[44] and, e.g., [9,23,32,33,36,49,51,61,62]). To sum up, to the best of our knowledge the MLP approximation method (see [45]) is, to date, the only approximation method in the scientific literature for which it has been shown that it overcomes the curse of dimensionality in the numerical approximation of semilinear PDEs with general time horizons.…”
Section: Introduction
confidence: 99%
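For orientation, here is a minimal sketch of one realization of the MLP recursion for the model fixed-point equation displayed above, assuming U_{0,M} = 0. The function name mlp, the nonlinearity f, the terminal condition g and all parameter choices are hypothetical illustrations, not the reference implementation of [45].

import numpy as np

rng = np.random.default_rng(0)

def mlp(n, M, t, x, T, f, g):
    # One realization of the multilevel Picard approximation U_{n,M}(t, x):
    # a Monte Carlo average for the terminal condition plus a telescoping
    # sum over Picard levels l = 0, ..., n-1 for the nonlinearity.
    if n == 0:
        return 0.0                                  # U_{0,M} = 0 by convention
    d = x.shape[0]
    W = rng.standard_normal((M**n, d)) * np.sqrt(T - t)
    u = float(np.mean(g(x + W)))                    # terminal-condition part
    for l in range(n):
        m = M ** (n - l)
        s = t + (T - t) * rng.random(m)             # uniform time samples in [t, T]
        y = x + rng.standard_normal((m, d)) * np.sqrt(s - t)[:, None]
        for i in range(m):
            delta = f(mlp(l, M, s[i], y[i], T, f, g))
            if l > 0:                               # subtract the previous level
                delta -= f(mlp(l - 1, M, s[i], y[i], T, f, g))
            u += (T - t) * delta / m
    return u

# Illustrative use: an Allen-Cahn-type nonlinearity in d = 10 dimensions.
d = 10
f = lambda v: v - v**3
g = lambda z: np.exp(-np.sum(z**2, axis=-1) / d)
print(mlp(3, 3, 0.0, np.zeros(d), 1.0, f, g))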