One of the main concerns in the O&G business is generating reliable production profile forecasts. Such profiles are the cornerstone of optimal techno-economic management decisions. A workflow is proposed that combines different methodologies to integrate and reduce most of the subsurface uncertainties, using multiple history-matched models (explaining the past) to infer reasonably reliable production forecasts. Using experimental design theory, a sensitivity study is first performed to scan the whole range of static and dynamic uncertain parameters with a proxy model of the fluid flow simulator. Only the most sensitive parameters with respect to an objective function (quantifying the mismatch between the simulation results and the observations) are retained for the subsequent steps. Assisted history matching tools are then used to obtain multiple history-matched models, an order of magnitude faster than traditional history matching processes. The updated uncertain parameters (selected from the sensitivity study) may be picked anywhere in the direct-problem building workflow. Within the Bayesian framework, the posterior distribution of the most sensitive parameters is derived from the a priori distributions and a non-linear proxy model of the likelihood function. The latter is computed using experimental design, kriging and dynamic training techniques. The multiple history-matched models, together with the a posteriori parameter distributions, are finally used in a joint modeling approach to capture the main uncertainties and to obtain typical (P10-P90) probabilistic production profiles. This workflow has been applied to a real gas storage case subject to significant seasonal pressure variations. Probabilistic operational pressure profiles for a given period can then be compared to the actual dynamic behaviour of the gas storage to assess the added value of the proposed workflow.

Introduction

Obtaining probabilistic production forecasts for a reservoir through a risk analysis is closely linked to uncertainty quantification 1. Uncertainty quantification should end with a posteriori uncertain parameter distributions, reflecting all the knowledge we have on the reservoir and explaining the observation data as much as possible. The uncertainties may result from the quality of the observation data as well as from the numerical modeling steps. On top of that, the fluid flow equations we are dealing with are non-linear, which can lead to complex hydrodynamical behaviour. CPU constraints make brute-force Monte Carlo sampling of the a posteriori distributions from the a priori ones inefficient. Most current approaches try to scan the uncertain parameter space more efficiently (within a given timeframe) using reliable proxy models 2 of the fluid flow simulator and/or to take advantage of new computational power 3,4,5,6. Because of the numerous uncertain parameters involved in the history matching process, a screening of the parameter space is first performed (i.e. a sensitivity study) to retain only the most sensitive parameters with respect to the history matching criteria, possibly reducing their variation intervals. This is a pre-processing step of the history matching job.
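To make the Bayesian step above concrete, here is a minimal sketch, not taken from the paper, of how a kriging-type proxy of the mismatch function can replace the expensive simulator inside a Metropolis-Hastings sampler to yield posterior parameter samples and forecast percentiles. The mismatch function, the forecast relation and the parameter bounds are hypothetical placeholders, and scikit-learn's Gaussian process regressor stands in for the kriging proxy built from an experimental design.

```python
# Minimal sketch (not the authors' code): posterior sampling of the most
# sensitive parameters using a kriging-type proxy of the mismatch function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def simulator_mismatch(theta):
    """Hypothetical objective function F(theta): weighted least-squares mismatch
    between simulated and observed data (cheap analytical placeholder here)."""
    return np.sum((theta - np.array([0.3, 0.7])) ** 2 / 0.05 ** 2)

# 1. Experimental design over the two most sensitive parameters (unit hypercube).
design = rng.uniform(0.0, 1.0, size=(40, 2))
f_vals = np.array([simulator_mismatch(t) for t in design])

# 2. Kriging-type proxy of the mismatch (Gaussian process regression).
proxy = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
proxy.fit(design, f_vals)

# 3. Metropolis-Hastings sampling of p(theta | data) ~ prior * exp(-F/2),
#    evaluated on the cheap proxy instead of the full simulator.
def log_post(theta):
    if np.any(theta < 0) or np.any(theta > 1):      # uniform prior on [0, 1]^2
        return -np.inf
    return -0.5 * proxy.predict(theta.reshape(1, -1))[0]

samples, theta = [], np.array([0.5, 0.5])
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[1000:])                  # drop burn-in

# 4. Push posterior samples through a (hypothetical) forecast relation and
#    report the percentiles used for probabilistic production profiles.
forecast = 100.0 * samples[:, 0] + 50.0 * samples[:, 1]   # placeholder forecast
p10, p50, p90 = np.percentile(forecast, [10, 50, 90])
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
```

The key design choice, shared with the workflow described above, is that the expensive flow simulator is only run on the experimental design points, while the cheap proxy is evaluated thousands of times during sampling.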
History matching is an integral part of reservoir production forecasting, risk analysis and uncertainty quantification workflows. One has to cope with the non-uniqueness issue, as history matching is an ill-posed inverse problem due to a lack of constraints and data. Dealing with several history-matched models is therefore critical, and assisted history matching tools are of great interest to speed up the process. In practice, structural as well as petrophysical, PVT, SCAL, etc. data may be highly uncertain, and the history matching process rarely tackles all these parameters in a single step. Classically, some of these parameters are considered as known while others are updated. This constitutes the 'by default' approach; since all these parameters are interdependent, it may lead to sub-optimal history-matched models. This manuscript presents an original history matching workflow that picks uncertain structural and petrophysical parameters anywhere in the "geomodeling to simulation" workflow, using a popular geomodeling software. An efficient parameterization technique of the geological model allows both the geological and simulation models to be updated at the same time, preserving their mutual consistency. Using a versatile assisted history matching software, any external software, such as a geomodeling package, may be automatically launched in batch mode from the constructed workflow. Background scripts then control each building step of the geological and simulation models, possibly capitalizing on an existing geomodel. This joint structural and petrophysical history matching leads to a more robust integrated geological stochastic reservoir model, as all uncertainties are tackled and reduced simultaneously. The results obtained on a 3D faulted synthetic waterflooding scenario demonstrate that this history matching approach is efficient, since horizon depths, fault throw and transmissivity as well as facies distribution, petrophysical and SCAL properties are simultaneously updated to explain the production history.

Introduction

One of the main outputs of reservoir engineering studies is reliable production forecasts. Within that framework, history matching of reservoir model(s) is pivotal but not necessarily sufficient (Carter et al. 2006): key reservoir model inputs are updated until a satisfactory match is obtained between simulated and observed data. But the history matching process is an under-determined inverse problem. One will never gather enough data to constrain a unique reservoir model, and potentially many models explain the data equally well. All of them should be considered in the production forecasting process. Moreover, the history matching criterion investigated has a non-smooth shape with many minima. This is a consequence of geological modeling and multiphase fluid flow simulations, which are based on non-linear coupled equations, and it complicates the optimization process associated with the history matching loop. The problem is exacerbated when dealing with facies modeling as well as structural inputs, which is the case in the proposed application. History matching of structurally complex reservoirs may appear challenging because the uncertainty in reservoir geometry may impact production forecasts by order(s) of magnitude more than the petrophysics-related uncertainty. Structural uncertainty may stem from the poor quality of seismic data.
Seismic data processing, migration and interpretation results, as well as time-depth conversion, are themselves non-unique and rely on subjective choices. In such cases, the traditional history matching approach considers the reservoir geometry as fixed during the optimization process and updates only the petrophysical and fluid-related parameters. Considering some parameters as constant (and thus artificially no longer uncertain) while updating the remaining ones may lead to sub-optimal history-matched models.
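As an illustration of the assisted workflow described above, the following sketch shows how background scripts could perturb structural and petrophysical parameters together, relaunch the geomodel build in batch mode, run the flow simulator, and retain every realization whose mismatch falls below a threshold, yielding multiple history-matched models. The executables, scripts, file names, parameter bounds and threshold are entirely hypothetical; the paper's actual geomodeling and simulation tools are not reproduced here.

```python
# Minimal sketch (hypothetical commands and file names): joint structural and
# petrophysical assisted history matching driven by background scripts.
import subprocess
import numpy as np

rng = np.random.default_rng(42)

# Uncertain parameters picked anywhere in the "geomodeling to simulation"
# workflow: horizon depth shift (m), fault throw multiplier, fault
# transmissivity multiplier, mean porosity. Bounds are illustrative only.
BOUNDS = {
    "horizon_shift": (-15.0, 15.0),
    "throw_mult": (0.5, 1.5),
    "fault_transmissivity": (0.01, 1.0),
    "mean_porosity": (0.12, 0.28),
}
THRESHOLD = 1.0          # acceptable normalized mismatch (assumption)
matched_models = []

def rebuild_and_simulate(params):
    """Write the parameter deck, rebuild the geomodel in batch mode, run the
    flow simulator, and return the mismatch written by a post-processing step.
    All executables, scripts and files below are placeholders."""
    with open("params.txt", "w") as f:
        for name, value in params.items():
            f.write(f"{name} {value}\n")
    subprocess.run(["geomodeler_batch", "rebuild_model.script"], check=True)
    subprocess.run(["flow_simulator", "case.data"], check=True)
    with open("mismatch.txt") as f:
        return float(f.read())

for trial in range(200):
    # Sample structural and petrophysical parameters simultaneously so that
    # geological and simulation models stay consistent with each other.
    params = {name: rng.uniform(lo, hi) for name, (lo, hi) in BOUNDS.items()}
    try:
        mismatch = rebuild_and_simulate(params)
    except (subprocess.CalledProcessError, FileNotFoundError):
        continue          # skip failed geomodel rebuilds or simulation runs
    if mismatch < THRESHOLD:
        matched_models.append((params, mismatch))

print(f"{len(matched_models)} history-matched models retained for forecasting")
```

In a real study the uniform sampling loop would typically be replaced by the experimental-design and optimization machinery of the assisted history matching software, but the structure of the loop, rebuilding the geological model before every flow simulation so that structural parameters remain live, is the point being illustrated.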