The experimental design method is an alternative to traditional sensitivity analysis. The basic idea behind this methodology is to vary multiple parameters at the same time so that maximum inference can be attained at minimum cost. Once an appropriate design is established and the corresponding experiments (simulations) are performed, the results can be investigated by fitting them to a response surface. This surface is usually an analytical or a simple numerical function that is cheap to sample; it can therefore be used as a proxy for the reservoir simulator to quantify uncertainties. Designing an efficient sensitivity study poses two main issues:

1. Designing a parameter-space sampling strategy and carrying out the experiments.
2. Analyzing the results of the experiments (response surface generation).

In this paper we investigate these steps by testing various experimental designs and response surface methodologies on synthetic and real reservoir models. We compared conventional designs, such as Plackett-Burman, central composite and D-optimal designs, with a space-filling design technique that aims to optimize the coverage of the parameter space. We analyzed these experiments using linear and second-order polynomials as well as more complex response surfaces such as kriging, splines and neural networks. We compared these response surfaces in terms of their capability to estimate the statistics of the uncertainty (i.e., P10, P50 and P90 values), their estimation accuracy and their capability to identify the influential parameters (heavy-hitters). Comparison with our exhaustive simulations showed that experiments generated by the space-filling design and analyzed with kriging, splines and quadratic polynomials gave the greatest accuracy, while traditional designs and their associated response surfaces performed poorly for some of the cases we studied. We also found good agreement between the polynomials and the complex response surfaces in terms of estimating the effect of each parameter on the response.

Introduction

Reservoir simulators are capable of integrating detailed static geological information with dynamic engineering data to represent the complex flow of fluids in porous media. They have therefore been used extensively for planning and evaluating field development projects. Usually, economic parameters such as net present value (NPV) or recovery estimates such as cumulative oil production are used to assess the value of the different alternatives in a development study. Since most of the inputs to the simulation studies are uncertain and uncontrollable (like static reservoir properties), many sensitivity studies have to be performed, which can be prohibitive because of costly simulations.
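To make the proxy idea concrete, the sketch below builds a space-filling (Latin hypercube) design over a small parameter space, fits a second-order polynomial response surface to the simulated responses, and then runs a Monte Carlo on the cheap surface to estimate P10, P50 and P90. The simulator function, the three parameters and their ranges are hypothetical stand-ins for a real reservoir simulator and are used for illustration only.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stand-in for a reservoir simulator: a recovery estimate as a
# function of three uncertain parameters (illustration only).
def simulator(x):
    perm, poro, aquifer = x
    return 100.0 * poro * np.log(perm) + 5.0 * aquifer + 0.01 * perm

bounds_lo = np.array([50.0, 0.10, 0.0])    # permeability (mD), porosity, aquifer strength
bounds_hi = np.array([500.0, 0.30, 10.0])

# Space-filling (Latin hypercube) design covering the parameter space.
design = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(n=30), bounds_lo, bounds_hi)
responses = np.array([simulator(x) for x in design])

# Second-order polynomial response surface fitted by least squares.
def quad_features(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i in range(X.shape[1]) for j in range(i, X.shape[1])]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(quad_features(design), responses, rcond=None)

# Monte Carlo on the cheap proxy to estimate P10/P50/P90 of the response.
rng = np.random.default_rng(2)
mc = rng.uniform(bounds_lo, bounds_hi, size=(100_000, 3))
proxy = quad_features(mc) @ coef
p10, p50, p90 = np.percentile(proxy, [10, 50, 90])
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
```

The expensive step is the 30 simulator runs; once the surface is fitted, the hundred-thousand-sample Monte Carlo costs essentially nothing, which is what makes the proxy approach attractive compared with exhaustive simulation.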
Experimental design methodology not only offers an efficient way of assessing uncertainties by providing inference with a minimum number of simulations, but can also identify the key parameters governing uncertainty in economic and production forecasts, which might guide the data acquisition strategy during the early phases of a field development project.[1] The commonly used workflow for this purpose is as follows:

1. Define a large set of potential key parameters and their probability distributions.
2. Perform a low-level experimental design study, such as Plackett-Burman, which combines the high and low values of the key parameters.
3. Perform the simulations corresponding to each of the experiments.
4. Fit the economic or recovery estimates obtained from the simulations to a simple response surface, which is usually linear.
5. Using the probability distributions attached to the parameters, perform a Monte Carlo simulation on the response surface.
6. Generate a tornado diagram to rank the effect of each parameter on the economic or recovery estimates.
7. Screen the heavy-hitters from the tornado diagram.
8. Perform a more detailed design, such as full/fractional factorial, D-optimal, Box-Behnken, central composite, etc., with the heavy-hitters.
9. Repeat steps 3 and 4.
10. Perform a Monte Carlo simulation on the new response surface to obtain the probability density function (pdf) of the economic or recovery estimates.
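A minimal sketch of the screening part of this workflow (steps 2-7) is given below: a two-level design is run, a simple linear response surface is fitted, and the parameters are ranked by the magnitude of their effects as in a tornado diagram. A small full factorial stands in here for the Plackett-Burman design, and the simulator function and parameter names are hypothetical.

```python
import itertools
import numpy as np

# Hypothetical simulator response used only for illustration; the first and
# third parameters are the intended "heavy-hitters".
def simulator(coded):
    x1, x2, x3, x4 = coded
    return 250.0 + 40.0 * x1 + 2.0 * x2 + 25.0 * x3 + 1.0 * x4 + 3.0 * x1 * x3

# Two-level design combining the high (+1) and low (-1) value of each
# parameter (a full factorial stands in for Plackett-Burman here).
design = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))
responses = np.array([simulator(run) for run in design])

# Fit the simple (linear) response surface of the screening step.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, responses, rcond=None)

# Tornado-style ranking: sort parameters by the magnitude of their effect.
names = ["perm", "poro", "aquifer", "skin"]   # hypothetical parameter names
effects = 2.0 * coef[1:]                      # low-to-high swing of each factor
for name, eff in sorted(zip(names, effects), key=lambda t: -abs(t[1])):
    print(f"{name:8s} effect = {eff:+.1f}")
# The parameters at the top of this list are the heavy-hitters carried into
# the more detailed design (D-optimal, central composite, etc.).
```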
Automatic history matching is the process of calibrating the parameters of a reservoir model by means of an automated algorithm so that calculated values come close to the reservoir observations. Automatic history matching has been investigated heavily; however, only a fraction of these investigations have provided ways to assess the uncertainties associated with predictions, and those have been computationally prohibitive. In this study, an approach based on response surfaces and sensitivity coefficients is presented that not only gives a computationally efficient history-matching process but also provides a framework for assessing the uncertainties associated with predictions. Sensitivity coefficients are used to construct a response surface, or proxy for the simulator, that honors the exact data values and gradients at the simulated combinations of the parameters. This proxy is then used to guide the selection of the subsequent locations to sample during the history-matching process. The accuracy of the proxy increases with the additional simulations as the algorithm progresses. The proxy obtained at the end of the history-matching process is then used to estimate the uncertainty associated with predictions of the future performance of the reservoir model. The proposed method is applied to a synthetic well-testing example and a real field case for history matching and uncertainty estimation.

Introduction

Calibration or conditioning of reservoir simulation models to historical production data, or, in short, history matching, is a required step before a reservoir model can be accepted for making predictions of the future performance of the reservoir. It has long been recognized that history matching is not only a difficult problem to solve but also a nonunique inverse problem. Nonuniqueness means that there are multiple combinations of model parameters that all lead to acceptable representations of the history of the reservoir. Although these parameter combinations produce similar results that mimic the past performance of the reservoir, they may produce a range of results when it comes to predicting future performance. This range of solutions in future predictions stems from the nonuniqueness of the history-matching process and corresponds to the uncertainty in the predictions, which is a critical input to business decisions. It would therefore be desirable to have a process designed not only to provide an acceptable match to the historical production data but also to quantify the uncertainty associated with the predictions made with the calibrated model. This problem is currently the subject of intensive research.1–6 The approach presented in this study attempts to deliver multiple solutions that all match the history and also to provide the uncertainties associated with predictions. Some of the methods used in this approach are sensitivity coefficients, response surfaces, optimization and experimental design. There is extensive literature around each of these technologies; only a brief description of them and their relevance to this study is presented here. There is an abundance of automated and assisted history-matching methods published in the petroleum literature. A significant portion of these approaches use gradient-type algorithms7 to minimize an objective function, which is generally defined as some variation of the difference between the historical production data of the field and the reservoir performance calculated with a reservoir simulator.
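As a rough illustration of the idea, the one-parameter sketch below uses a cubic Hermite spline as a proxy that honors both the misfit values and their gradients at the simulated points, and then lets the proxy's minimizer choose the next parameter value to simulate. The misfit function, its gradient and the single parameter are hypothetical; the proxy construction and sampling strategy used in the study itself are more elaborate.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hypothetical misfit between observed and simulated data as a function of a
# single history-matching parameter (e.g. a permeability multiplier).
def misfit(m):
    return (m - 2.3) ** 2 + 0.1 * np.sin(5.0 * m)

# Gradient of the misfit; in practice this is assembled from the simulator's
# sensitivity coefficients rather than known analytically.
def misfit_gradient(m):
    return 2.0 * (m - 2.3) + 0.5 * np.cos(5.0 * m)

# Start from a few simulated parameter values; the proxy honors both the
# exact misfit values and their gradients at these points.
samples = [0.5, 1.5, 3.5]
for _ in range(5):
    xs = np.sort(np.array(samples))
    proxy = CubicHermiteSpline(xs, misfit(xs), misfit_gradient(xs))

    # Use the proxy to pick the next parameter value to simulate: the
    # proxy's minimizer on a dense grid within the sampled range.
    grid = np.linspace(xs[0], xs[-1], 2001)
    candidate = grid[np.argmin(proxy(grid))]
    if min(abs(candidate - x) for x in samples) < 1e-3:
        break                          # proxy minimum has already been simulated
    samples.append(candidate)          # "run the simulator" at the new point

print("approximate history-matched parameter:", round(samples[-1], 3))
```

After the loop terminates, the final proxy can be sampled cheaply, for example by Monte Carlo over the parameter's prior distribution, to characterize the spread of acceptable matches and hence the uncertainty in the predictions, mirroring the use of the final proxy described above.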
The gradient of the objective function can be calculated efficiently either by using adjoint equations8,9 or by way of sensitivity coefficients.10–17 Sensitivity coefficients are defined as the derivatives of the simulator output with respect to the parameters being adjusted to obtain a history match. They are generally used as input to the highly efficient Gauss-Newton or Levenberg-Marquardt algorithms, and have also been used to estimate the uncertainties associated with predictions from history-matched models.18,19 Although gradient-based algorithms are efficient, their shortcomings are that they may get trapped in local minima and that they provide a single solution despite the fact that there are multiple acceptable solutions that may be significantly diverse.
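Schematically, the sensitivity coefficients form the Jacobian that drives these updates. The sketch below shows a single damped (Levenberg-Marquardt) step; with the damping factor set to zero it reduces to a Gauss-Newton step. Data weighting, prior (regularization) terms and step-length control used in practical implementations are omitted.

```python
import numpy as np

def levenberg_marquardt_step(S, d_obs, d_calc, lam):
    """One parameter update built from sensitivity coefficients.

    S      : (n_data, n_params) sensitivity coefficients, d(d_calc)/d(m)
    d_obs  : observed production data
    d_calc : data calculated by the simulator at the current parameters
    lam    : damping factor (lam = 0 gives a plain Gauss-Newton step)
    """
    residual = d_obs - d_calc
    lhs = S.T @ S + lam * np.eye(S.shape[1])
    rhs = S.T @ residual
    # Returns delta_m, the correction added to the current parameter vector.
    return np.linalg.solve(lhs, rhs)
```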