2019
DOI: 10.1016/j.jcp.2019.05.024
Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data

Abstract: Surrogate modeling and uncertainty quantification tasks for PDE systems are most often considered as supervised learning problems where input and output data pairs are used for training. The construction of such emulators is by definition a small data problem which poses challenges to deep learning approaches that have been developed to operate in the big data regime. Even in cases where such models have been shown to have good predictive capability in high dimensions, they fail to address constraints in the d…
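The abstract's central idea, training a surrogate by penalizing the PDE residual of its own predictions rather than fitting labeled input/output pairs, can be pictured with a minimal sketch. The 1D Poisson problem, finite-difference residual, and small fully connected network below are illustrative assumptions only, not the paper's actual high-dimensional architecture or loss.

```python
# Minimal sketch of physics-constrained surrogate training without labeled data:
# the loss is the discretized PDE residual of the network's own prediction,
# so no solver-generated output fields are needed. The 1D Poisson setup and
# the small MLP are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

n = 64                                    # grid resolution
h = 1.0 / (n - 1)                         # grid spacing on [0, 1]

surrogate = nn.Sequential(                # maps a source field f to a solution u
    nn.Linear(n, 128), nn.Tanh(),
    nn.Linear(128, 128), nn.Tanh(),
    nn.Linear(128, n),
)

def pde_residual_loss(f_batch):
    """Residual of -u'' = f for the predicted u, plus a boundary penalty."""
    u = surrogate(f_batch)
    # second-order central difference for u'' at interior grid points
    u_xx = (u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]) / h**2
    interior = ((-u_xx - f_batch[:, 1:-1]) ** 2).mean()
    boundary = (u[:, 0] ** 2 + u[:, -1] ** 2).mean()   # enforce u(0) = u(1) = 0
    return interior + boundary

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for step in range(2000):
    f = torch.randn(32, n)                # random source fields; no labeled solutions
    loss = pde_residual_loss(f)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```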

Cited by 814 publications (498 citation statements)
Citation types: 3 supporting, 495 mentioning, 0 contrasting
References 86 publications
“…We also provided sufficient conditions for consistency, stability and convergence of functional approximation schemes to compute the solution of FDEs, thus extending the well-known Lax-Richtmyer theorem from PDEs to FDEs. As we suggested in [69], these results open the possibility to utilize techniques for high-dimensional model representation such as deep neural networks [52,53,79] and numerical tensor methods [17,3,55,7,59,37] to represent nonlinear functionals and compute approximate solutions to functional differential equations. We conclude by emphasizing that the results we obtained in this paper can be extended to real- or complex-valued functionals in compact Banach spaces (see, e.g., [33,65]).…”
Section: Discussion (mentioning)
confidence: 83%
“…The two factors together make the commonly used surrogate methods, such as Gaussian processes (Rasmussen & Williams, ) and polynomial chaos expansion (Xiu & Karniadakis, ), difficult to apply. Deep neural networks have already exhibited promising and impressive performance for surrogate modeling of forward models with high-dimensional input and output fields (Kani & Elsheikh, ; Mo, Zabaras, et al, ; Mo, Zhu, et al, ; Sun, ; Tripathy & Bilionis, ; Zhong et al, ; Zhu & Zabaras, ; Zhu et al, ). For example, in Tripathy and Bilionis () a deep neural network was proposed to build a surrogate model for a single-phase flow forward model.…”
Section: Introduction (mentioning)
confidence: 99%
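The kind of high-dimensional field-to-field surrogate this excerpt describes can be sketched, under assumptions, as a small convolutional encoder-decoder. The channel counts, depth, and the permeability/pressure naming below are illustrative only and do not reproduce any of the cited models.

```python
# Minimal sketch of an image-to-image surrogate: a small convolutional
# encoder-decoder mapping a 2D input field (e.g. a permeability map) to a 2D
# output field (e.g. pressure). All sizes are assumed values for illustration.
import torch
import torch.nn as nn

class FieldSurrogate(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FieldSurrogate()
perm = torch.rand(8, 1, 64, 64)      # batch of input fields on a 64x64 grid
pressure = model(perm)               # predicted output fields at the same resolution
print(pressure.shape)                # torch.Size([8, 1, 64, 64])
```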
“…In Sun () and Zhong et al (), the surrogate methods for a single-phase flow forward model and a multiphase flow forward model, respectively, were based on an adversarial network framework. In our previous studies (Mo, Zabaras, et al, ; Mo, Zhu, et al, ; Zhu & Zabaras, ; Zhu et al, ), a deep dense convolutional network (DDCN), which is based on a dense connection structure (Huang et al, ) for better information-flow efficiency, was employed as the surrogate modeling framework. It showed good performance in efficiently obtaining accurate surrogates of various forward models with high-dimensional input-output mappings.…”
Section: Introduction (mentioning)
confidence: 99%
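A rough sketch of the dense-connection structure (Huang et al.) that the DDCN surrogates build on: every layer receives the concatenation of all earlier feature maps. The growth rate and layer count below are assumed values for illustration, not the configuration of any cited model.

```python
# Sketch of a DenseNet-style dense block: each layer sees the concatenation of
# all previous feature maps, improving information flow through the block.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate           # each layer adds `growth_rate` channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # every layer receives the concatenation of all earlier feature maps
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=32)
y = block(torch.rand(2, 32, 64, 64))
print(y.shape)                                # torch.Size([2, 96, 64, 64])
```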
“…Since a number of authors have begun to consider the use of machine/deep learning for problems in traditional computational physics, see, e.g., [1,2,3,4,5,6,7,8,9,10,11,12], we are motivated to consider methodologies that constrain the interpolatory results of a network to be contained within a physically admissible region. Quite recently, [13] proposed adding physical constraints to generative adversarial networks (GANs), also considering projection as we do, while stressing the interplay between scientific computing and machine learning; we refer the interested reader to their work for even more motivation for such approaches.…”
Section: Introduction (mentioning)
confidence: 99%
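One way to picture "constraining the interpolatory results of a network to a physically admissible region" is a post-hoc projection of the raw prediction onto a constraint set. The set used below (non-negative fields with a prescribed total mass) and the clamp-plus-hyperplane projection are assumptions for illustration; they do not reproduce the specific projection of the citing paper or of [13].

```python
# Hedged sketch of projecting a network prediction onto an assumed admissible
# set {u >= 0, sum(u) = total_mass}: clamp to non-negativity, then apply the
# closed-form Euclidean projection onto the hyperplane sum(u) = total_mass.
import torch

def project_admissible(u: torch.Tensor, total_mass: float) -> torch.Tensor:
    """Approximately project a predicted field onto {u >= 0, sum(u) = total_mass}."""
    u = torch.clamp(u, min=0.0)                       # enforce non-negativity
    # Euclidean projection onto the hyperplane sum(u) = total_mass
    correction = (total_mass - u.sum()) / u.numel()
    u = u + correction
    # the final clamp can perturb the sum slightly; an exact projection onto the
    # intersection would require an iterative scheme (e.g. alternating projections)
    return torch.clamp(u, min=0.0)

raw_prediction = torch.randn(64, 64)                  # stand-in for a network output
admissible = project_admissible(raw_prediction, total_mass=100.0)
print(admissible.min().item() >= 0.0, admissible.sum().item())
```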