2019
DOI: 10.1016/j.cageo.2019.04.006

Towards a robust parameterization for conditioning facies models using deep variational autoencoders and ensemble smoother

Abstract: History matching is the jargon used to refer to the data assimilation problem in oil and gas reservoirs. The literature on history matching is vast and, despite the impressive number of methods proposed and the significant progress reported in the last decade, conditioning reservoir models to dynamic data is still a challenging task. Ensemble-based methods are among the most successful and efficient techniques currently available for history matching. These methods are usually able to achieve reasonable data…
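The ensemble-based data assimilation the abstract refers to can be illustrated with the standard ensemble-smoother (Kalman-type) analysis step. The NumPy sketch below is a generic single-iteration update under assumed shapes and variable names; it is not the paper's exact algorithm, which couples the smoother with a deep-VAE parameterization of the facies model.

```python
import numpy as np

def ensemble_smoother_update(M, D, d_obs, C_e, seed=0):
    """One generic ensemble-smoother analysis step (illustrative sketch).

    M     : (Nm, Ne) ensemble of model parameters (e.g., latent variables)
    D     : (Nd, Ne) ensemble of simulated data predicted from M
    d_obs : (Nd,)    observed (dynamic) data
    C_e   : (Nd, Nd) observation-error covariance
    """
    Ne = M.shape[1]
    # Ensemble anomalies (deviations from the ensemble mean)
    A_m = M - M.mean(axis=1, keepdims=True)
    A_d = D - D.mean(axis=1, keepdims=True)
    # Sample cross-covariance (parameters vs. data) and data auto-covariance
    C_md = A_m @ A_d.T / (Ne - 1)
    C_dd = A_d @ A_d.T / (Ne - 1)
    # Perturb the observations so each member is pulled toward a noisy copy
    rng = np.random.default_rng(seed)
    D_obs = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(d_obs.size), C_e, size=Ne).T
    # Kalman-type gain and update of every ensemble member
    K = C_md @ np.linalg.inv(C_dd + C_e)
    return M + K @ (D_obs - D)
```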


Cited by 129 publications (77 citation statements)
References 68 publications (78 reference statements)
“…Usually the negative expected log-likelihood (e.g., the cross-entropy function) is used ([30], [31], [33]-[35]), but the mean squared error [32] can also be used. The second term L_KL (equ.…”
Section: Variational Autoencoders (mentioning)
confidence: 99%
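For context, the two loss terms this statement refers to are the parts of the standard VAE objective (the negative evidence lower bound). The formulation below is the common textbook version in my own notation, not necessarily the exact equation of the citing paper:

$$\mathcal{L}(\theta,\phi;x) \;=\; \underbrace{-\,\mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big]}_{\text{reconstruction: cross-entropy or MSE}} \;+\; \underbrace{D_{\mathrm{KL}}\big(q_\phi(z|x)\,\big\|\,p(z)\big)}_{\mathcal{L}_{KL}}$$

and, for a Gaussian posterior $q_\phi(z|x)=\mathcal{N}(\mu,\operatorname{diag}(\sigma^2))$ with prior $p(z)=\mathcal{N}(0,I)$, the KL term has the closed form

$$\mathcal{L}_{KL} \;=\; -\tfrac{1}{2}\sum_{j}\big(1 + \log\sigma_j^{2} - \mu_j^{2} - \sigma_j^{2}\big).$$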
“…Most of the generative applications deal with image processing, as in [33], where a VAE was trained to generate face images with much clearer and more natural noses, eyes, teeth, and hair textures, as well as reasonable backgrounds. In [30], a generative model is constructed to create new random realizations of faces that are indistinguishable from samples.…”
Section: Variational Autoencoders (mentioning)
confidence: 99%
“…Tools based on deep-learning have been shown to be applicable for such geological parameterizations. Specific approaches include those based on variational autoencoders (VAEs) [18,19] and generative adversarial networks (GANs) [20,21,22,23,24,25,26,27]. Algorithms based on a combination of VAE and GAN have also been devised [7].…”
Section: Introduction (mentioning)
confidence: 99%
“…Deep generative models (DGMs) are part of the deep-model family and are a powerful way to learn any distribution of observed data through unsupervised learning. The DGMs are composed mainly of variational autoencoders (VAEs) [1][2][3][4] and generative adversarial networks (GANs) [5]. The VAEs are mainly used to extract features from the input vector in an unsupervised way, while the GANs are used to generate synthetic samples through adversarial learning, achieving an equilibrium between a Generator and a Discriminator.…”
Section: Introduction (mentioning)
confidence: 99%
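The statement above describes the generic VAE/GAN split at a high level. As a concrete illustration of the VAE side, a minimal PyTorch sketch is given below; the layer sizes, the sigmoid/cross-entropy decoder, and the names are illustrative assumptions rather than the architecture used in the cited works.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully connected VAE: encoder -> (mu, logvar) -> decoder."""

    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)       # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)   # posterior log-variance
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps with eps ~ N(0, I): the reparameterization trick
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term (cross-entropy here; MSE is a common alternative)
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    # Closed-form KL divergence between N(mu, sigma^2) and the N(0, I) prior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```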
“…The VAEs have met with great success in recent years in several application areas, including anomaly detection [6][7][8][9], text classification [10], sentence generation [11], speech synthesis and recognition [12][13][14], spatio-temporal solar irradiance forecasting [15], and, in geoscience, data assimilation [2]. In other respects, the two major application areas of the VAEs are biomedical and healthcare recommendation [16][17][18][19], and industrial applications for nonlinear process monitoring [1,3,4,20-25].…”
Section: Introduction (mentioning)
confidence: 99%