2018
DOI: 10.1002/2017wr022148
Training‐Image Based Geostatistical Inversion Using a Spatial Generative Adversarial Neural Network

Abstract: Probabilistic inversion within a multiple‐point statistics framework is often computationally prohibitive for high‐dimensional problems. To partly address this, we introduce and evaluate a new training‐image based inversion approach for complex geologic media. Our approach relies on a deep neural network of the generative adversarial network (GAN) type. After training using a training image (TI), our proposed spatial GAN (SGAN) can quickly generate 2‐D and 3‐D unconditional realizations. A key characteristic o…
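The abstract's core idea (a fully convolutional "spatial" GAN whose generator maps a grid of latent noise to a 2‐D or 3‐D realization, so the output size scales with the latent grid) can be illustrated with a minimal sketch. This assumes PyTorch; the SpatialGenerator class, layer count, and channel sizes below are illustrative choices, not the authors' actual SGAN architecture.

```python
import torch
import torch.nn as nn

class SpatialGenerator(nn.Module):
    """Fully convolutional generator: a spatial latent noise grid is
    upsampled into a larger 2-D realization, so the output size scales
    with the size of the input noise grid (hypothetical architecture)."""
    def __init__(self, latent_channels=8, feature_maps=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, feature_maps * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, stride=2, padding=1),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feature_maps, 1, 4, stride=2, padding=1),
            nn.Tanh(),  # realization values scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Draw an unconditional realization: a 5 x 5 spatial latent grid
# is mapped to a 40 x 40 realization (three 2x upsampling steps).
generator = SpatialGenerator()
z = torch.randn(1, 8, 5, 5)   # (batch, latent channels, height, width)
realization = generator(z)    # shape: (1, 1, 40, 40)
```

Because the generator is fully convolutional, feeding a larger latent grid (e.g., 10 × 10) yields a proportionally larger realization without retraining, which is what makes this kind of parameterization attractive for variable-size model domains.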

Cited by 325 publications (285 citation statements)
References 42 publications
“…1 Introduction and Scope: Laloy et al. (2018) recently proposed to use generative adversarial networks (GANs), a game-changing data-generation algorithm (e.g., Goodfellow et al., 2014, 2016), to define a low-dimensional parameterization encoding complex geologic prior models, thereby allowing efficient and accurate geostatistical inversion with Markov chain Monte Carlo (MCMC) methods (Laloy et al., 2018). GANs have permitted impressive advancements for a wide range of applications such as image and texture synthesis, image-to-image translation, and super-resolution (Creswell et al., 2017).…”
“…Laloy et al. (2018) have shown that inversions based on such parameterizations work well for global probabilistic inference of complex binary 2D and 3D prior subsurface models. However, exploring the GAN-derived latent space with state-of-the-art MCMC sampling (Vrugt et al., 2009; Laloy and Vrugt, 2012) still necessitates tens of thousands (or more) of forward evaluations (Laloy et al., 2018). Such a computational expense can be prohibitive when using the computationally demanding forward solvers encountered in the geosciences.…”
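The quoted point, that probabilistic inversion in the GAN-derived latent space still requires very many forward-solver calls, can be seen from a toy latent-space sampler. The cited studies use the DREAM(ZS) algorithm; the sketch below substitutes a plain random-walk Metropolis step, and generator, forward_model, and latent_dim are hypothetical placeholders rather than anything from the papers.

```python
import numpy as np

def mcmc_in_latent_space(generator, forward_model, d_obs, sigma,
                         n_iter=10000, step=0.1, latent_dim=20, rng=None):
    """Toy random-walk Metropolis over a GAN latent space.
    (The cited studies use the more elaborate DREAM(ZS) sampler;
    generator and forward_model are user-supplied placeholders.)"""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(latent_dim)

    def log_post(z):
        m = generator(z)                 # latent vector -> geologic model
        d_sim = forward_model(m)         # model -> simulated data (the expensive step)
        misfit = np.sum((d_sim - d_obs) ** 2) / (2.0 * sigma ** 2)
        prior = 0.5 * np.sum(z ** 2)     # standard normal prior on latent variables
        return -(misfit + prior)

    lp = log_post(z)
    chain = []
    for _ in range(n_iter):
        z_new = z + step * rng.standard_normal(latent_dim)
        lp_new = log_post(z_new)
        if np.log(rng.uniform()) < lp_new - lp:   # Metropolis accept/reject
            z, lp = z_new, lp_new
        chain.append(z.copy())
    return np.array(chain)
```

Each iteration evaluates forward_model once, so 10,000 iterations mean 10,000 forward runs, which is exactly the cost the excerpt flags as prohibitive for expensive solvers.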
“…The original adversarial autoencoder framework is composed of fully connected layers (Makhzani et al., 2016), making it increasingly difficult to train as the network gets deeper due to the large number of trainable parameters. To resolve this issue, we develop a CAAE framework based on convolutional layers to leverage their sparse-connectivity and parameter-sharing properties as well as their robust capability in image-like data processing (Laloy et al., 2018; Shen, 2018). GAN (Goodfellow et al., 2014) is a framework that establishes an adversarial game between two networks: a generative network G(·) (i.e., the generator) that learns the distribution p_data(x) over the data, and a discriminative network D(·) (i.e., the discriminator) that computes the probability that a sample x is drawn from p_data(x) rather than generated by the generator.…”
Section: CAAE for Parameterization of Non-Gaussian Random Fields
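The adversarial game described in the excerpt can be written down compactly. Below is a minimal sketch of one training step, assuming PyTorch, standard non-saturating GAN losses, a discriminator that outputs probabilities of shape (batch, 1), and pre-built generator/discriminator modules; all of these are assumptions for illustration, not the cited CAAE implementation.

```python
import torch
import torch.nn.functional as F

def gan_training_step(generator, discriminator, g_opt, d_opt, real_batch, latent_dim=64):
    """One adversarial update with standard binary cross-entropy GAN losses.
    The network definitions and optimizers are assumed to be built elsewhere."""
    batch_size = real_batch.size(0)

    # --- Discriminator: label real samples as 1 and generated samples as 0 ---
    z = torch.randn(batch_size, latent_dim)
    fake_batch = generator(z).detach()   # stop gradients flowing into the generator
    d_loss = (F.binary_cross_entropy(discriminator(real_batch), torch.ones(batch_size, 1))
              + F.binary_cross_entropy(discriminator(fake_batch), torch.zeros(batch_size, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # --- Generator: try to make the discriminator label fakes as real ---
    z = torch.randn(batch_size, latent_dim)
    g_loss = F.binary_cross_entropy(discriminator(generator(z)), torch.ones(batch_size, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```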
“…We generate 10,000 training samples and 1,000 test samples of this input field over a uniform grid of 32 × 32 cells/pixels. The third input field dataset considered is a channelized field [74], defined with binary values of 0.01 and 1.0. This dataset is taken from https://github.com/cicsnd/pde-surrogate, and the samples are obtained by cropping 32 × 32 patches from a 1250 × 1250 image.…”
Section: Stochastic Boundary Value Problem in 2D
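A short sketch of the patch-extraction step described in the excerpt. The excerpt does not say whether crops are random or on a regular grid; this version assumes random crops, and the binary field generated here is a synthetic stand-in rather than the actual channelized dataset at the linked repository.

```python
import numpy as np

def crop_patches(image, n_patches=10000, patch=32, rng=None):
    """Randomly crop square patches from a large 2-D field
    (e.g., a 1250 x 1250 channelized training image)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    rows = rng.integers(0, h - patch + 1, size=n_patches)
    cols = rng.integers(0, w - patch + 1, size=n_patches)
    return np.stack([image[r:r + patch, c:c + patch] for r, c in zip(rows, cols)])

# Example: synthetic binary field with facies values 0.01 and 1.0
field = np.where(np.random.rand(1250, 1250) > 0.5, 1.0, 0.01)
samples = crop_patches(field, n_patches=10000)   # shape: (10000, 32, 32)
```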