2019 IEEE/ACM Third Workshop on Deep Learning on Supercomputers (DLS) 2019
DOI: 10.1109/dls49591.2019.00006

Highly-scalable, Physics-Informed GANs for Learning Solutions of Stochastic PDEs

Abstract: Uncertainty quantification for forward and inverse problems is a central challenge across physical and biomedical disciplines. We address this challenge for the problem of modeling subsurface flow at the Hanford Site by combining stochastic computational models with observational data using physics-informed GAN models. The geographic extent, spatial heterogeneity, and multiple correlation length scales of the Hanford Site require training a computationally intensive GAN model to thousands of dimensions. We dev…

Cited by 31 publications (19 citation statements)
References 31 publications
“…In contrast, in the PINN method, both spatial derivatives and the gradients with respect to DNN parameters are computed via AD, and the methodology does not require solving the PDE problem or formulating an adjoint problem. Finally, significant gains can be achieved in the PINN method's performance by employing graphics processing unit (GPU) accelerators for training DNNs (Yang et al, ). GPUs are efficient for DNN training because they offer far more parallel compute resources and higher memory bandwidth, and DNN computations (mostly matrix multiplications) run very fast on GPUs.…”
Section: Parameter Estimation in a Linear Diffusion Equation
confidence: 99%
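The quoted passage hinges on automatic differentiation (AD) computing exact spatial derivatives of a network output, with no finite differences and no hand-derived adjoint. A minimal sketch of the idea using a hand-rolled forward-mode dual number and a toy one-neuron "network" — the class, weights, and function names here are illustrative, not taken from the cited work:

```python
import math

class Dual:
    """Minimal forward-mode AD: a value and its derivative travel together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule carried automatically alongside the value.
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def tanh(d):
    t = math.tanh(d.val)
    return Dual(t, (1.0 - t * t) * d.dot)  # chain rule for tanh

# Toy "network" u(x) = w2 * tanh(w1 * x + b) (hypothetical weights).
def u(x, w1=0.5, b=0.1, w2=2.0):
    return w2 * tanh(w1 * x + b)

x = Dual(1.0, 1.0)   # seed dx/dx = 1 at the evaluation point
out = u(x)
# out.val is u(1.0); out.dot is the exact spatial derivative du/dx at x = 1.0
```

Real PINN frameworks do the same thing at scale (typically via reverse-mode AD in a deep-learning library), differentiating the network output with respect to its spatial inputs to form the PDE residual in the loss.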
“…In the last 20 years, data‐driven discovery, including machine learning (ML), has emerged as the fourth paradigm of science (in addition to experimental science, model‐based science, and computational science) and has become a popular tool in hydrology. For example, deep neural networks (DNNs) have been used for flood and wind forecasting (Dalto et al, ; Liu et al, ), predicting fracture evolution in brittle materials (Schwarzer et al, ), modeling groundwater levels (Daliakopoulos et al, ), and uncertainty quantification in subsurface flow models (Mo et al, ; Yang & Perdikaris, ; Yang et al, ; Zhu et al, ; Zhu & Zabaras, ).…”
Section: Introduction
confidence: 99%
“…Explainable AI-based models developed in Focus Area-3 mine complex data (e.g., from multiple sensors, heterogeneous data sources sampled at different frequencies, metadata) to verify the quality of collected data. Physics-informed generative adversarial networks (e.g., [2, 3, 9, 12]) provide reassurance on data fidelity. For instance, a deeper and more generalized understanding of data can be achieved through deep Taylor decomposition, SHapley Additive exPlanations (SHAP), and local interpretable model-agnostic explanations [2, 3].…”
Section: Earth System Models: A Key Question to Address Is "How Can Physics-informed AI Dynamically Guide the Real-time Collection and Int…
confidence: 99%
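SHAP, mentioned in the passage above, attributes a model's prediction to its input features via Shapley values from cooperative game theory. A small self-contained sketch of the exact (exponential-time) computation on a toy linear model, where features absent from a coalition are replaced by a baseline value — the function and toy model are illustrative assumptions, not the SHAP library's API:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of model f at point x against a baseline:
    average each feature's marginal contribution over all coalitions."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):                      # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy additive model: attributions should recover each linear term exactly.
f = lambda z: 3.0 * z[0] + 2.0 * z[1]
phi = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
# phi sums to f(x) - f(baseline), the efficiency property Shapley values guarantee.
```

The SHAP library computes approximations of these values efficiently for large models; this brute-force version only conveys the definition.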
“…Khoo et al [23] extend efforts for solving parametric PDEs. In addition to these mathematical developments, recent work such as Botelho et al [3] and Yang et al [50] enables the scalable training of models used for solving PDEs. Specifically, Yang et al [50] demonstrate the scalability of the framework to 27,500 GPUs.…”
Section: Introduction
confidence: 99%
“…In addition to these mathematical developments, recent work such as Botelho et al [3] and Yang et al [50] enables the scalable training of models used for solving PDEs. Specifically, Yang et al [50] demonstrate the scalability of the framework to 27,500 GPUs. However, the application of these methods in 3-dimensional spatial domains is computationally expensive.…”
Section: Introduction
confidence: 99%