Quantification of uncertainty in production forecasting is an important aspect of reservoir simulation studies. The uncertainty in the forecasting stems from the uncertainties of various model-input parameters, such as permeability, porosity, relative permeability endpoints, etc. Traditionally, the outcome of history matching is a set of parameter values that result in a good match of the historical production data. Clearly, the history matching process will be even more valuable if the uncertainties of these model-input parameters can be quantified in the process. In this paper, we present a systematic history matching approach to condition a reservoir model to production data and quantify the uncertainties of history matching parameters in terms of probability density functions. The new approach utilizes experimental design and multi-objective global optimization techniques. More specifically, for a given list of uncertain parameters, the history matching process is treated as a combinatorial optimization problem to find the best combination of these parameters to achieve the minimum history match error. The combinatorial optimization problem is solved by applying a hybrid metaheuristic method that combines evolutionary algorithms, Tabu search, and experimental design techniques. In the optimization process, reservoir models containing different combinations of parameter values are automatically generated to cover a wide range of possibilities based on the principles of experimental design and Tabu search. The search space of the optimization problem is gradually reduced by adopting the natural selection mechanism to discard parameter values that do not fit field data. Finally, the posterior probability density functions of the uncertain parameters are estimated by applying Bayesian theory. The proposed methodology is demonstrated in a real field case study of a complex oil field, which has 12 production wells and 10 years of production history. 
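The final Bayesian step described above can be sketched in miniature for a single uncertain parameter on a discrete grid. Everything in this sketch is a hypothetical illustration, not taken from the paper: the parameter values, the match errors, and the Gaussian form of the likelihood are all assumptions.

```python
import numpy as np

# Hypothetical discrete candidates for one uncertain parameter
# (e.g. a permeability multiplier), with a uniform prior
values = np.array([0.5, 1.0, 1.5, 2.0])
prior = np.full(len(values), 0.25)

# Assumed history-match errors (sum of squared mismatch) for each candidate model
match_error = np.array([8.0, 2.0, 1.0, 6.0])
sigma2 = 2.0  # assumed measurement-error variance

# Likelihood ~ exp(-error / (2*sigma^2)); posterior by Bayes' rule, normalized
likelihood = np.exp(-match_error / (2.0 * sigma2))
posterior = prior * likelihood
posterior /= posterior.sum()
```

The resulting `posterior` array is a discrete estimate of the parameter's posterior probability density function; in a multi-parameter study the same update would be applied over the retained combinations of parameter values.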
Some of the wells in the reservoir are found to be difficult to match using the traditional manual history matching approach. After applying the new approach, all the well histories are successfully matched. More importantly, the posterior probability density functions of the uncertain parameters are estimated in the history matching process. The results can be further used to quantify the uncertainty in the production forecasting of follow-up recovery processes.

Introduction

Traditionally, history matching is done manually by varying a few reservoir parameters until a satisfactory match is obtained. It is often the most tedious and time-consuming task in a reservoir simulation study. Limited by the available time frame, the manual trial-and-error approach usually leads to only a single matched model and provides very little information on the uncertainties of that model. History matching is by nature a complex, non-linear, and ill-posed inverse problem. Like most inverse problems, it is characterized by non-uniqueness of the solution: different combinations of model parameter values may yield similarly acceptable matches of the reservoir's historical data. To achieve a better understanding of the uncertainties of the reservoir, it is necessary to obtain as many good matches as possible in the history matching process [1–3].
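The combinatorial search and the natural-selection step described above can be sketched as follows. This is a deliberately simplified, hypothetical illustration: the parameter names, their discrete levels, and the synthetic `match_error` function (which stands in for a full reservoir simulation run) are assumptions, and the evolutionary crossover/mutation and Tabu-list machinery that the paper combines is omitted.

```python
import itertools

# Hypothetical discrete levels for three uncertain parameters
levels = {
    "perm_mult": [0.5, 1.0, 2.0],   # permeability multiplier
    "poro_mult": [0.9, 1.0, 1.1],   # porosity multiplier
    "krw_end":   [0.3, 0.5, 0.7],   # relative permeability endpoint
}

def match_error(model):
    # Stand-in for a reservoir simulation run: a synthetic error surface
    # with its minimum at (1.0, 1.0, 0.5). A real study would run the
    # simulator and compare against historical production data here.
    p, phi, krw = model
    return (p - 1.0) ** 2 + 10 * (phi - 1.0) ** 2 + (krw - 0.5) ** 2

# Enumerate the full combinatorial design (27 models), as an
# experimental-design step would
population = list(itertools.product(*levels.values()))

# Natural-selection step: rank by match error and discard the worst half,
# shrinking the search space for the next iteration
population.sort(key=match_error)
survivors = population[: len(population) // 2]
best = survivors[0]
```

In an actual workflow this ranking-and-discarding loop would repeat, with new candidate models generated from the survivors, until the match error falls below a tolerance.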
We present the theory and application of a new approach for assisted history matching and uncertainty assessment. In a Bayesian framework, the a priori geological information is conditioned to the production history to give the posterior probability distribution function (pdf). The full posterior pdf is explored to assess the uncertainty through ensembles of reservoir models sampled by a Markov chain Monte Carlo algorithm. To achieve this, we construct proxy functions for the output of the flow simulator for all measurements that enter a global objective function. The proxy functions are constructed using polynomials and multi-dimensional kriging. An iterative loop, in which ensembles of reservoir models are sampled from the posterior pdf, is run to improve the quality of the proxy functions. The power of the application is demonstrated on two reservoir models. First, we apply the method to a synthetic case: a modified reservoir simulation model from a small StatoilHydro-operated oil field is investigated with a synthetic production history and 20 tuning parameters. Finally, we apply the method to the StatoilHydro-operated Heidrun field. A model that covers the upper formations of the field, with 26 production wells, 11 injector wells, and 56 tuning parameters, is conditioned to 11 years of production history. We show that it is possible to construct proxy functions accurate enough to describe the full posterior pdf and thereby assess the uncertainty associated with these reservoir models.

Introduction

Computer-assisted history matching has been a topic of research for decades, and several strategies for minimizing a global objective function with as little computational effort as possible have been developed. Recently, more focus has been put on risk management and uncertainty assessment, and the dangers of basing decisions on a single "base case" reservoir simulation model are widely recognized.
The history matching and uncertainty assessment challenges may be united within a Bayesian framework. Geological knowledge is used as prior information, which is conditioned to the historical production data to give a posterior probability distribution function (pdf). The reservoir models sampled from this posterior pdf will be history matched in the sense that the simulated responses are focused around the observed responses within a prescribed error tolerance. A popular approach has been to perform a global search for the reservoir model that maximizes the posterior pdf. However, the most probable model need not be representative, and to assess the uncertainty one needs to sample from the full posterior pdf. Several strategies for exploring the full posterior pdf exist; the most prominent example is perhaps the Ensemble Kalman Filter (EnKF) (Evensen 2006; Evensen et al. 2007; Gaoming Li and Reynolds 2007; Haugen et al. 2006). The EnKF method starts out with an ensemble sampled from the a priori pdf. The ensemble is then approximately conditioned to the measurements sequentially, under the assumption that the underlying fields are Gaussian. Randomized Maximum Likelihood (RML) (Kitanidis 1995; Ning et al. 2001) is another class of methods that sample approximately from the full posterior pdf using a Gaussian assumption. In the RML methods, each member of an ensemble sampled from the a priori pdf is "history matched" to form a posterior ensemble. Characterizing the full posterior pdf by running the flow simulator in a Monte Carlo loop is prohibitively computationally demanding, and only a few attempts on synthetic models (Floris et al. 2001; Barker et al. 2001; Hegstad and Omre 2001; Oliver et al. 1996) have been reported. Monte Carlo sampling schemes require an overwhelming number of steps, as they are only asymptotically correct.
Reduced-physics models and streamline simulators have been proposed as means to lower the computational cost (Ma et al. 2006; Maučec et al. 2007), but even these fast simulators are too slow to enable reliable sampling of the posterior pdf for real field cases.
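The central idea of the paper — running a Monte Carlo sampler against a cheap proxy instead of the flow simulator — can be sketched with a random-walk Metropolis loop over one tuning parameter. The quadratic proxy, the standard-normal prior, and the single parameter are illustrative assumptions, not details from the paper; a real application would use the polynomial/kriging proxies fitted to simulator runs.

```python
import math
import random

random.seed(42)

def proxy_objective(x):
    # Hypothetical cheap proxy for the simulator mismatch: a quadratic
    # response surface standing in for the fitted proxy functions
    return (x - 1.2) ** 2

def log_posterior(x):
    log_prior = -0.5 * x * x          # assumed standard-normal prior
    log_like = -proxy_objective(x)    # likelihood from the proxy mismatch
    return log_prior + log_like

# Random-walk Metropolis: accept a move with probability min(1, p'/p),
# evaluating only the cheap proxy at every step
x, samples = 0.0, []
for _ in range(20000):
    cand = x + random.gauss(0.0, 0.5)
    if math.log(random.random()) < log_posterior(cand) - log_posterior(x):
        x = cand
    samples.append(x)

burn = samples[5000:]               # discard burn-in
mean = sum(burn) / len(burn)        # posterior-mean estimate
```

For this choice of prior and proxy the posterior is Gaussian with mean 0.8, which the chain recovers; with a proxy in place of the simulator, the tens of thousands of evaluations that make direct Monte Carlo prohibitive become affordable.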
This paper presents a comparative study of the proxy-modeling methodology (also known as surrogate modeling or metamodeling) as a computationally cheap alternative to full numerical simulation in assisted history matching, production optimization, and forecasting. The study demonstrates the solution-space complexity of different simulation models and the ability of proxy-models to mimic it. Focus is given to the practical aspects of model construction and to the limitations of which engineers should be aware. Results of stochastic optimization driven by full numerical simulation are compared to the proxy-model solutions in order to demonstrate the strengths and weaknesses of each approach and determine desirable areas of application. Several simulation models of different complexity were used to demonstrate the impact of model structure, the number of uncertainty parameters, and the type of problem on the simulation model response and on the efficiency of proxy-model application. The results are presented for different datasets, proxy-models, and simulation model outputs to demonstrate the dependence of the approximation quality on these parameters. The dependence of proxy-model prediction quality on the sampling method and on the complexity of the uncertainty domain is also shown. The efficiency of proxy-model application in history matching and production optimization was compared to stochastic optimization with full reservoir simulation. The results of this study demonstrate that, with increasing complexity of the solution space and number of uncertainties, the proxy-modeling methodology is not recommended for history matching. In the history matching case, the use of full reservoir simulations combined with stochastic search methods is preferable and, above a certain level of complexity, the only acceptable solution.
Nevertheless, proxy-modeling might be a good approach for certain production optimization projects and an appropriate tool for forecasting Hydrocarbons Initially In Place (HCIIP) and oil recovery (OR). This study suggests areas of application for proxy-models and full numerical simulation, addresses the pros and cons of both approaches in reservoir simulation, and provides advice for their efficient application.

Introduction

Recent progress in computational hardware and software development has opened new frontiers in reservoir modeling. However, for many workflows in uncertainty quantification and optimization with application to reservoir simulation, the availability of computing resources is still a limiting factor. Engineers are therefore still looking for ways to reduce the computational load of simulation studies, so the application of computationally efficient proxy-models is attracting a lot of attention. In this paper we refer to a "proxy-model" as a mathematically or statistically defined function that replicates the simulation model output for selected input parameters. The terms "response surface model", "meta-model", and "surrogate model" are sometimes used as alternatives to "proxy-model"; however, "proxy-model" seems to be more accepted in the petroleum industry and will be used in this paper. Proxy-models are widely applied in different areas of science to approximate numerical models. Typical application areas in reservoir simulation include:

- Sensitivity analysis of uncertainty variables;
- Probabilistic forecasting and risk analysis;
- Conditioning of a simulation model to historically observed data (history matching);
- Field development planning and production optimization.
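The basic construction of such a proxy-model can be sketched in a few lines: sample the simulator at a small experimental design, fit a response surface, then evaluate the cheap surface instead of the simulator. The one-parameter "simulator" below is a hypothetical stand-in (in practice each training point would be a full reservoir simulation run), and the quadratic polynomial is just one of the proxy types discussed in the paper.

```python
import numpy as np

def simulator(x):
    # Hypothetical simulator response over one uncertainty parameter,
    # e.g. recovery factor as a function of a multiplier
    return 50.0 + 8.0 * x - 2.0 * x ** 2

# Sample a small experimental design, then fit a quadratic proxy-model
x_train = np.linspace(0.0, 2.0, 5)
y_train = simulator(x_train)
coeffs = np.polyfit(x_train, y_train, deg=2)
proxy = np.poly1d(coeffs)

# The proxy now replaces the simulator at unsampled points
x_new = 1.3
y_proxy = proxy(x_new)
```

Because the toy response here is itself quadratic, the fit is essentially exact; for a real solution space, the abstract's caveat applies — approximation quality degrades as the number of uncertainties and the complexity of the response grow, which is exactly when full simulation with stochastic search becomes preferable.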