SPE Annual Technical Conference and Exhibition 2016
DOI: 10.2118/181325-ms
Ensemble-Based Assisted History Matching With Rigorous Uncertainty Quantification Applied to a Naturally Fractured Carbonate Reservoir

Abstract: This paper presents an ensemble-based computer Assisted History Matching (AHM) of a real-life carbonate oil field. The field-level reservoir pressures were matched with a fine-scale Dual-Porosity Dual-Permeability (DPDP) model spanning a long production history under primarily peripheral water injection pressure support. The well-level AHM workflow presented was validated with a DPDP high-resolution sector model of a fracture-dominated carbonate reservoir. This sector model was ~17 million active…

Cited by 15 publications (5 citation statements)
References 42 publications
“…Among the iterative forms of ES, the ensemble smoother with multiple data assimilation (ES-MDA) [15] is a popular choice. The popularity of ES-MDA can be attributed mainly to its good performance in history-matching problems [16,9,33,20] and its simplicity of implementation. In fact, ES-MDA is essentially equivalent to repeating ES a few times with the data-error covariance matrix, C_e, multiplied by coefficients α_k to avoid overweighting the measurements.…”
Section: Data-Space Inversion With Ensemble Smoother
confidence: 99%
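The repeated-ES update described in the statement above can be sketched numerically. Below is a minimal sketch of one ES-MDA assimilation step in Python, assuming a Gaussian data-error model; the function name, array shapes, and ensemble-covariance estimates are illustrative, not the cited implementation:

```python
import numpy as np

def es_mda_update(M, D, d_obs, Ce, alpha, rng):
    """One ES-MDA assimilation step (sketch).

    M     : (Nm, Ne) ensemble of model parameters
    D     : (Nd, Ne) predicted data for each ensemble member
    d_obs : (Nd,)    observed data
    Ce    : (Nd, Nd) data-error covariance matrix
    alpha : inflation coefficient for this step (multiplies Ce)
    """
    Ne = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)   # parameter anomalies
    dD = D - D.mean(axis=1, keepdims=True)   # predicted-data anomalies
    Cmd = dM @ dD.T / (Ne - 1)               # cross-covariance estimate
    Cdd = dD @ dD.T / (Ne - 1)               # data auto-covariance estimate
    # Perturb the observations with inflated noise drawn from alpha * Ce
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * Ce, size=Ne).T
    K = Cmd @ np.linalg.inv(Cdd + alpha * Ce)  # damped Kalman-like gain
    return M + K @ (d_obs[:, None] + E - D)
```

Repeating this step with coefficients α_k whose inverses sum to one recovers the multiple-data-assimilation scheme the statement describes.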
“…Among these methods, the ones based on the ensemble Kalman filter (EnKF) [17,19] have become quite popular, especially because of their ease of implementation, their integration with commercial reservoir simulators, and their ability to generate multiple models with a large number of uncertain parameters at an affordable computational cost. Despite the relative success in a number of recent field cases reported in the literature (see, for example, [16,7,9,33,1,20,28]), generating a set of models properly conditioned to all historical data while still preserving geological realism is very challenging, especially in cases with a complicated prior description, such as models with fractures and complex facies distributions.…”
Section: Introduction
confidence: 99%
“…ES-MDA requires defining the number of data assimilations in advance. The results presented in (Emerick and Reynolds, 2013b,c; Emerick, 2016; Maucec et al., 2016) indicate that a few data assimilations suffice for practical history-matching problems. Here, we use four data assimilations, which is our typical choice.…”
Section: History Matching
confidence: 96%
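The four-assimilation choice mentioned above corresponds to the common constant-coefficient schedule. A minimal check of the ES-MDA consistency condition, namely that the inverse inflation coefficients sum to one (the constant schedule α_k = Na is one common choice, shown here for illustration):

```python
# ES-MDA condition: sum over k of 1/alpha_k must equal 1.
# A constant schedule with Na = 4 assimilations sets every alpha_k = Na.
Na = 4
alphas = [float(Na)] * Na
assert abs(sum(1.0 / a for a in alphas) - 1.0) < 1e-12
```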
“…The ES-MDA is an iterative assimilation scheme that uses the same ES formulation, assimilating the same data multiple times (in multiple assimilation steps, or iterations) with the addition of an inflation factor to damp each iteration. There are several ES-MDA applications to large-scale reservoir history matching, e.g., Maucec et al. [16], Breslavich et al. [17], Emerick [18] and Morosov and Schiozer [19]. The need to define the number of iterations and the inflation factors before the assimilation process is one of the main drawbacks of the ES-MDA in its standard form: the entire process must be restarted if the quality of the results is unsatisfactory after the algorithm ends.…”
Section: Introduction
confidence: 99%
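The fixed-schedule drawback noted in the statement above can be illustrated with a toy scalar loop. This is a sketch with a hypothetical identity forward model g(m) = m and made-up numbers, not any of the cited implementations; the point is that the inflation factors are fixed before the first pass, so a poor final match means rerunning every pass:

```python
import numpy as np

rng = np.random.default_rng(1)
alphas = [4.0, 4.0, 4.0, 4.0]       # schedule fixed before assimilation starts
m = rng.normal(5.0, 1.0, size=200)  # prior ensemble of one scalar parameter
d_obs, ce = 0.0, 0.01               # observation and its error variance
for a in alphas:                    # each pass reassimilates the same d_obs
    d = m.copy()                    # toy forward model g(m) = m
    cmd = np.cov(m, d)[0, 1]        # parameter-data cross-covariance
    cdd = d.var(ddof=1)             # predicted-data variance
    k = cmd / (cdd + a * ce)        # gain damped by the inflation factor a
    m = m + k * (d_obs + np.sqrt(a * ce) * rng.standard_normal(m.size) - d)
# after the last pass the ensemble mean sits near d_obs, away from the prior
```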