2020
DOI: 10.1016/j.jcp.2020.109409
Weak adversarial networks for high-dimensional partial differential equations

Abstract: Solving general high-dimensional partial differential equations (PDE) is a long-standing challenge in numerical mathematics. In this paper, we propose a novel approach to solve high-dimensional linear and nonlinear PDEs defined on arbitrary domains by leveraging their weak formulations. We convert the problem of finding the weak solution of PDEs into an operator norm minimization problem induced from the weak formulation. The weak solution and the test function in the weak formulation are then parameterized as…
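The weak-formulation idea from the abstract can be illustrated on a toy 1-D Poisson problem. The sketch below is not the paper's adversarial network method: as a simplifying assumption, it replaces both the solution network and the adversarial test-function network with small fixed sine bases, keeping only the core ingredients — a weak-form residual tested against a family of test functions, estimated by Monte Carlo sampling and minimized over the trial coefficients. All names and sizes (`K`, `N`, the choice of source term) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4                      # number of trial/test basis functions
N = 20000                  # Monte Carlo sample size
x = rng.uniform(0.0, 1.0, N)

# Problem: -u''(x) = f(x) on (0,1), u(0) = u(1) = 0,
# with f chosen so the exact solution is u*(x) = sin(pi x).
f = np.pi**2 * np.sin(np.pi * x)

# Trial function u(x) = sum_k c_k sin(k pi x); test functions phi_j(x) = sin(j pi x).
k = np.arange(1, K + 1)
phi = np.sin(np.outer(x, k) * np.pi)                      # test functions, shape (N, K)
du = k[None, :] * np.pi * np.cos(np.outer(x, k) * np.pi)  # derivatives of trial basis
dphi = du                                                 # same basis, same derivatives

# Monte Carlo estimates of the weak-form quantities on the unit interval:
#   A[j, m] ~ \int u_m' phi_j' dx,   b[j] ~ \int f phi_j dx
A = dphi.T @ du / N
b = phi.T @ f / N

# The trial function is linear in its coefficients, so minimizing the squared
# weak residual ||A c - b||^2 is a least-squares problem (no adversarial loop needed).
c, *_ = np.linalg.lstsq(A, b, rcond=None)

xs = np.linspace(0.0, 1.0, 101)
u = np.sin(np.outer(xs, k) * np.pi) @ c
err = np.max(np.abs(u - np.sin(np.pi * xs)))
print(f"max error vs exact solution: {err:.3e}")
```

In the paper's actual method, both bases would be neural networks and the test function would be trained adversarially to maximize the weak residual, turning the least-squares solve above into a min-max optimization.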

Cited by 282 publications (187 citation statements) · References 30 publications
“…In the past decade, deep learning has achieved great success in many subjects, such as computer vision, speech recognition, and natural language processing [7,11,18], due to the strong representational power of deep neural networks (DNNs). Meanwhile, DNNs have also been used to solve partial differential equations (PDEs); see, for example, [1,4,5,8,19,22,24,26,28]. In classical numerical methods such as the finite difference method [20] and the finite element method [2], the number of degrees of freedom (DoFs) grows exponentially as the dimension of the PDE increases.…”
Section: Introduction
confidence: 99%
See 1 more Smart Citation
“…In the past decade, deep learning has achieved great success in many subjects, like computer vision, speech recognition, and natural language processing [7,11,18] due to the strong representability of deep neural networks (DNNs). Meanwhile, DNNs have also been used to solve partial differential equations (PDEs); see for example [1,4,5,8,19,22,24,26,28]. In classical numerical methods such as finite difference method [20] and finite element method [2], the number of degrees of freedoms (dofs) grows exponentially fast as the dimension of PDE increases.…”
Section: Introductionmentioning
confidence: 99%
“…Afterwards, the Monte Carlo method is used to approximate the loss (objective) function, which is defined over a high-dimensional space. Some methods are based on the PDE itself [24,26], while others are based on the variational or weak formulation [5,16,21,28]. Another successful example is the multilevel Picard approximation, which provably overcomes the curse of dimensionality for a class of semilinear parabolic equations [13].…”
Section: Introduction
confidence: 99%
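The Monte Carlo approximation mentioned in the statement above is what makes these losses tractable in high dimension: the per-sample cost is independent of the dimension, whereas grid-based quadrature grows exponentially. A minimal illustration (the integrand and dimension are arbitrary choices, not from the cited works):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 100, 50000
x = rng.uniform(0.0, 1.0, size=(n, d))  # uniform samples in the unit hypercube

# Monte Carlo estimate of \int_{[0,1]^d} ||x||^2 dx = d/3.
# A tensor-product grid with only 2 points per axis would already need 2^100 evaluations.
estimate = np.mean(np.sum(x**2, axis=1))
print(f"MC estimate: {estimate:.3f}, exact: {d / 3:.3f}")
```

A PDE loss defined as an integral over the domain (strong residual or weak form) is estimated in exactly this way, with the integrand evaluated on randomly sampled collocation points at each training step.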
“…In recent years, neural networks have shown great success in representing high-dimensional classifiers or probability distributions in a variety of machine learning tasks and have led to the tremendous success and popularity of deep learning [32,38]. Motivated by these recent successes, researchers have been actively exploring deep learning techniques to solve high-dimensional PDEs [10,18,22,23,29,37,45,48] by using neural networks to parameterize the unknown solution. Thanks to the flexibility of neural network approximations, such methods have achieved remarkable results for various kinds of PDE problems, including eigenvalue problems for many-body quantum systems (see, e.g., [7,9,12,19,24-26,36]), where the high-dimensional wave functions are parameterized by neural networks whose architectures are designed to respect the symmetry properties of many-body quantum systems.…”
Section: Introduction
confidence: 99%
“…[21,30] provide multi-scale deep neural network methods that separate different frequencies of the loss function and approximate them with corresponding neural networks. [37] considers variational problems, where the loss function is defined via a weak formulation. To handle essential boundary conditions, [20] resorts to Nitsche's variational formulation.…”
Section: Introduction
confidence: 99%
“…To address the cost of computing high-order derivatives in deep learning methods, [29] proposes a Monte Carlo method to approximate second-order derivatives. In [32,37], the variational form reduces the order of derivatives through integration by parts. [11] proposes a derivative-free method for parabolic PDEs by solving the equivalent BSDE problem.…”
Section: Introduction
confidence: 99%
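The derivative-order reduction noted in the last statement is a standard consequence of integration by parts: for a test function φ that vanishes on the boundary,

```latex
\int_{\Omega} (\Delta u)\,\varphi \,dx
  \;=\; -\int_{\Omega} \nabla u \cdot \nabla \varphi \,dx
  \;+\; \underbrace{\int_{\partial\Omega} (\partial_n u)\,\varphi \,ds}_{=\,0,\ \text{since } \varphi|_{\partial\Omega} = 0}
```

so the weak form of a second-order PDE requires only first derivatives of the network u, which shortens the automatic-differentiation graph compared with strong-form residual losses.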