2020
DOI: 10.1029/2019wr026984
Accelerated Multilevel Monte Carlo With Kernel‐Based Smoothing and Latinized Stratification

Abstract: Heterogeneity and a paucity of measurements of key material properties undermine the veracity of quantitative predictions of subsurface flow and transport. For such model forecasts to be useful as a management tool, they must be accompanied by computationally expensive uncertainty quantification, which yields confidence intervals, probability of exceedance, and so forth. We design and implement novel multilevel Monte Carlo (MLMC) algorithms that accelerate estimation of the cumulative distribution functions (C…

Cited by 14 publications
(12 citation statements)
References 47 publications
“…At the optimal mix of 573 LFS and 100 HFS, in two of the five experiments the CNN trained on 12 hours of these multi-fidelity data has a lower RMSE than the mean RMSE of the CNN trained on 79 hours of the HFS data. This LFS/HFS ratio lies near the range, 1.5–5.5, suggested for multilevel Monte Carlo (Taverniers et al., 2020). For the data-generation budget of 12 hours, a mix dominated by the LFS data results in a CNN whose RMSE on test data exceeds 1.0, which indicates that the network's last Convolution Transpose 2 layer is not meaningfully trained.…”
Section: Model Performance
confidence: 81%
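The LFS/HFS mix discussed in this excerpt follows the standard MLMC sample-allocation rule, in which the per-level sample count is proportional to the square root of the level's variance-to-cost ratio. A minimal sketch of that rule for a two-level (low-fidelity/high-fidelity) estimator; the variance and cost values in the usage line are hypothetical, not taken from the cited papers:

```python
import math

def mlmc_allocation(variances, costs, budget):
    """Per-level sample counts for an MLMC estimator.

    Each count n_l is proportional to sqrt(V_l / C_l), scaled so that the
    total cost sum(n_l * C_l) matches the given budget. For two levels this
    fixes the LFS/HFS sample ratio at sqrt(V_0 * C_1 / (V_1 * C_0)).
    """
    weights = [math.sqrt(v / c) for v, c in zip(variances, costs)]
    scale = budget / sum(w * c for w, c in zip(weights, costs))
    return [scale * w for w in weights]

# Hypothetical numbers: a cheap low-fidelity level carrying most of the
# variance and an expensive high-fidelity correction level.
n_lfs, n_hfs = mlmc_allocation(variances=[1.0, 0.05],
                               costs=[1.0, 50.0],
                               budget=600.0)
```

The resulting counts are generally non-integer and would be rounded up in practice; the ratio n_lfs / n_hfs depends only on the variance and cost ratios, not on the budget.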
“…The relative permeability for the ℓth phase, k_rℓ, varies with the phase saturation, k_rℓ = k_rℓ(S_ℓ), in accordance with the Brooks-Corey constitutive model (Corey, 1954). Following Taverniers et al. (2020) and many others, we neglect the capillary forces, i.e., assume the pressure within the two phases to be equal, P_1 = P_2 ≡ P(x, t); this is a common assumption in applications to reservoir engineering and carbon sequestration. The computational domain D is a 150 m × 150 m square (Fig.…”
Section: Computational Example: Multi-phase Flow
confidence: 99%
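A minimal sketch of the Brooks-Corey relative-permeability relations k_rℓ(S_ℓ) this excerpt refers to, assuming the common two-phase power-law form; the residual saturations and the pore-size distribution index `lam` below are illustrative values, not parameters from the cited paper:

```python
def brooks_corey_krel(S_w, S_wr=0.1, S_nr=0.05, lam=2.0):
    """Brooks-Corey relative permeabilities for the wetting (w) and
    non-wetting (n) phases as functions of wetting saturation S_w.

    S_wr, S_nr : residual saturations of the two phases (illustrative)
    lam        : pore-size distribution index (illustrative)
    """
    # Effective (normalized) saturation, clamped to [0, 1].
    S_e = (S_w - S_wr) / (1.0 - S_wr - S_nr)
    S_e = min(max(S_e, 0.0), 1.0)
    k_rw = S_e ** ((2.0 + 3.0 * lam) / lam)
    k_rn = (1.0 - S_e) ** 2 * (1.0 - S_e ** ((2.0 + lam) / lam))
    return k_rw, k_rn
```

At residual wetting saturation this gives k_rw = 0 and k_rn = 1, and the roles reverse at full wetting saturation, which is the behavior the constitutive closure k_rℓ = k_rℓ(S_ℓ) supplies to the two-phase flow equations.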
“…In many applications, discrete samples of a continuous, and potentially complex, random process are generated as output, even though a continuous solution is desired. Some examples are given by particle-tracking of passive solute transport (e.g., Fernàndez-Garcia & Sanchez-Vila, 2011; Pedretti & Fernàndez-Garcia, 2013; Siirila-Woodburn et al., 2015; Carrel et al., 2018), reactive particle transport (e.g., Ding et al., 2012, 2017; Schmidt et al., 2017; Sole-Mari et al., 2017, 2019; Sole-Mari & Fernàndez-Garcia, 2018; Benson et al., 2019; Perez et al., 2019; Engdahl et al., 2017, 2019), and Monte Carlo simulation (e.g., Taverniers et al., 2020). A long history of statistical estimation has sought to best fit some continuous density function to a sequence of random samples, including maximum likelihood estimation (Brockwell & Davis, 2016) and kernel density estimation (Silverman, 1986).…”
Section: Introduction
confidence: 99%
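The kernel density estimation mentioned in this excerpt (Silverman, 1986) recovers a continuous density from discrete samples by centering a kernel on each one. A minimal sketch with Gaussian kernels and Silverman's rule-of-thumb bandwidth; this is a generic illustration, not the smoothing scheme of the cited paper:

```python
import math

def gaussian_kde(samples, bandwidth=None):
    """Return a function f(x) estimating the density of `samples`.

    f(x) is the average of Gaussian kernels centered on each sample. If no
    bandwidth is given, Silverman's rule of thumb is used:
    h = 1.06 * std * n**(-1/5).
    """
    n = len(samples)
    if bandwidth is None:
        mean = sum(samples) / n
        std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
        bandwidth = 1.06 * std * n ** (-0.2)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))

    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density
```

The returned estimate integrates to one by construction; the bandwidth controls the bias-variance trade-off, with Silverman's rule optimal only for near-Gaussian data.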