Understanding and capturing reservoir uncertainty is key to predicting reservoir performance and making operational decisions. Conventional industry practice, which relies on a single model or on three (high-mid-low) cases, has little ability to describe the full complexity of subsurface uncertainty and often yields poor forecasting performance. To improve our understanding of the effect of reservoir uncertainty on performance, we need an ensemble of models that spans the full space of the uncertain parameters. These parameters range widely, from global parameters, such as water-oil contact and fault transmissibility, to cell-based properties, such as heterogeneous permeability and porosity. While it is ideal to explore all possible parameter combinations, doing so can easily result in millions of models and become impractical for history matching and forecasting. In this work we present a two-step history matching workflow in which the uncertainties in local heterogeneity and in global parameters are investigated sequentially, in a manageable manner. The workflow begins with a geological model built from the available information. The local geological heterogeneity, which cannot be readily determined from information such as seismic images or well logs, is examined in the first step. We create an ensemble of 10³ to 10⁵ models that spans the uncertainty space for properties such as permeability, porosity, and/or net-to-gross, all constrained by geostatistical data such as ranges, standard deviations, and variograms. To reduce the ensemble to a manageable size, we implement the dynamic fingerprinting technique, a method based on streamline information (time of flight or drainage time), to screen and cluster the models. The concept behind this methodology is that, for each distinct property realization, a given production schedule generates a flow pattern which, like a fingerprint, is unique to that realization. The method is highly efficient because the time required to obtain the characteristic flow pattern is significantly shorter than the time of interest (typically the whole production history). The fingerprints from the individual realizations are collected and clustered according to their principal flow patterns through singular value decomposition. Each cluster aggregates a set of model realizations that, despite their apparent differences in the model space, all correspond to similar principal flow trends. A single representative is then chosen from each cluster. The second step of the workflow examines the uncertainties in the global parameters. For each representative obtained from the first step, we apply the standard Design of Experiments and proxy-modeling workflow to construct response surfaces as functions of the global parameters. Algorithms such as Markov chain Monte Carlo are then used to sample extensively and condition the models to the history data. The end result is a small set of models that are based on realistic geology, preserve flow-relevant subsurface uncertainty, and are conditioned to production data. The proposed workflow, referred to as the probabilistic history matching (PHM) workflow, provides an efficient and effective way to select representatives and condition them to historical data. The selected models can be used to make forecasts and to support development planning under uncertainty. An application of the workflow is demonstrated on a real-world field example.
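The screen-and-cluster step described above can be sketched in a few lines. The Python fragment below is a minimal illustration, not the authors' implementation: it assumes each realization's time-of-flight field has already been flattened into a fingerprint vector, and the names (`fingerprints`, `scores`, the component and cluster counts) are illustrative placeholders.

```python
# Minimal sketch of fingerprint clustering via SVD, assuming each
# realization's streamline time-of-flight (TOF) field has been
# flattened into a row of `fingerprints`. All sizes are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_models, n_cells = 1000, 2000                           # e.g. 10^3 realizations
fingerprints = rng.lognormal(size=(n_models, n_cells))   # stand-in TOF data

# Extract principal flow patterns with a singular value decomposition
# of the centered fingerprint matrix; keep the leading components.
X = fingerprints - fingerprints.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 10
scores = U[:, :k] * s[:k]          # each model's flow-pattern coordinates

# Cluster models by principal flow pattern; from each cluster, pick the
# realization closest to the centroid as the cluster's representative.
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(scores)
reps = []
for c in range(km.n_clusters):
    dists = np.linalg.norm(scores - km.cluster_centers_[c], axis=1)
    reps.append(int(np.argmin(np.where(km.labels_ == c, dists, np.inf))))
print("representative model indices:", reps)
```

The design point this sketch captures is that clustering happens in the low-dimensional space of principal flow patterns rather than in the raw property space, so realizations that look different cell by cell but flow alike end up in the same cluster.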
Reliable estimation of reservoir uncertainty is crucial to understanding a reservoir's value and making robust decisions. Conventional practices, in which history matching and production forecasting are performed on selected high-mid-low cases, do not provide a reliable estimate of forecast uncertainty; this is typically reflected in a narrow range of ultimate recovery (UR) or net present value (NPV) predictions. To capture the inherent subsurface uncertainty, it is necessary to use an ensemble of models that spans the full uncertainty space. The Probabilistic History Matching (PHM) workflow is an ensemble-based workflow aimed at improving the estimation of forecast uncertainty. One of the biggest challenges is that a large ensemble is typically required to span the uncertainty space, owing to the limited information available (e.g., core data or well logs). This requirement may render the Assisted History Matching (AHM) exercise infeasible when computational resources are limited. It is therefore necessary to reduce the number of models to a manageable size before performing AHM. Here we implement the Dynamic Fingerprinting workflow to select a set of representative models from the ensemble while preserving the uncertainty of the variables of interest. In this methodology, time-of-flight (TOF) and drainage-time (DRT) information, which provide direct estimates of swept and undrained volumes, are used to characterize each model. A small subset of models is then selected based on their dissimilarity in flow pattern and used for AHM and forecasting. The workflow was applied to a deepwater West Africa reservoir. An ensemble of 810 models was generated to represent the subsurface uncertainty. Ten models that were highly dissimilar in flow response were selected from the ensemble for AHM and for estimating forecast uncertainty. The AHM was performed using an Experimental Design (ED) - Response Surface Modeling (RSM) - Markov-chain Monte Carlo (MCMC) workflow. For validation, a different AHM workflow was performed on each of the 810 models using a derivative-free optimization algorithm. The comparison of the results supports both the choice of representatives from the Dynamic Fingerprinting workflow and the history matching conclusions from the ED-RSM-MCMC workflow.
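One plausible way to realize "selected based on their dissimilarity in flow pattern" is a greedy max-min (farthest-point) rule over pairwise distances between flow-based feature vectors. The sketch below assumes each model is summarized by such a TOF/DRT-derived vector; the abstract does not specify the selection rule, so this is an illustrative stand-in rather than the paper's method.

```python
# Hedged sketch: greedy max-min selection of mutually dissimilar models,
# assuming each model is summarized by a TOF/DRT-based feature vector.
import numpy as np
from scipy.spatial.distance import cdist

def select_dissimilar(features: np.ndarray, n_select: int) -> list[int]:
    """Greedy farthest-point selection of n_select rows of `features`."""
    dist = cdist(features, features)              # pairwise Euclidean distances
    chosen = [int(np.argmax(dist.sum(axis=1)))]   # seed with the most remote model
    while len(chosen) < n_select:
        # pick the model whose nearest already-chosen neighbor is farthest
        nearest = dist[:, chosen].min(axis=1)
        nearest[chosen] = -np.inf                 # never re-pick a chosen model
        chosen.append(int(np.argmax(nearest)))
    return chosen

rng = np.random.default_rng(1)
features = rng.normal(size=(810, 20))    # 810 models, 20 flow-pattern features
print(select_dissimilar(features, 10))   # ten representatives for AHM
```

Under this rule each newly added model is, by construction, as far as possible in flow response from everything already selected, which mirrors the abstract's goal of a small subset that still spans the flow-relevant uncertainty.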
Increased access to computational resources has allowed reservoir engineers to include assisted history matching (AHM) and uncertainty quantification (UQ) techniques as standard steps in reservoir management workflows. Several advanced methods have become available and are being used in routine activities without a proper understanding of their performance and quality. This paper provides recommendations on the efficiency and quality of different methods for application to production forecasting, supporting the reservoir-management decision-making process. Results from five advanced methods and two traditional methods were benchmarked in the study. The advanced methods include the nested-sampling method MultiNest, the integrated global-search Distributed Gauss-Newton (DGN) optimizer with Randomized Maximum Likelihood (RML), the integrated local-search DGN optimizer with a Gaussian Mixture Model (GMM), and two advanced Bayesian-inference-based methods from commercial simulation packages. Two traditional methods were also included for some test problems: the Markov-chain Monte Carlo (MCMC) method, which is known to produce accurate results but is too expensive for most practical problems, and a DoE-proxy-based method that is widely used and available in some form in most commercial simulation packages. The methods were tested on three cases of increasing complexity: a simple 1D model based on an analytical function with one uncertain parameter, a simple injector-producer well pair in the SPE01 model with eight uncertain parameters, and an unconventional reservoir model with one well and 24 uncertain parameters. A collection of benchmark metrics was considered to compare the results; the most useful included the total number of simulation runs, sample size, objective-function distributions, cumulative-oil-production forecast distributions, and marginal posterior parameter distributions. MultiNest and MCMC were found to produce the most accurate results, but MCMC is too costly for practical problems. MultiNest is also costly, though much more efficient than MCMC, and may be affordable for some practical applications. The proxy-based method is the lowest-cost solution, but its accuracy is unacceptably poor. DGN-RML and DGN-GMM offer the best compromise between accuracy and efficiency, with DGN-GMM the better of the two; both may produce some poor-quality samples that should be rejected before the final uncertainty quantification. The results of the benchmark study are somewhat surprising and make the reservoir engineering community aware of the quality and efficiency of the advanced and traditional methods used for AHM and UQ. Our recommendation is to use DGN-GMM instead of the traditional proxy-based methods for most practical problems, and to consider the more expensive MultiNest when the cost of running the reservoir models is moderate and high-quality solutions are desired.
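To make the reference method concrete, the fragment below sketches a random-walk Metropolis-Hastings MCMC in the spirit of the first benchmark case (one uncertain parameter, analytical forward model). The forward function, Gaussian likelihood, prior bounds, observation, and step size are all illustrative assumptions, not the benchmark's actual setup.

```python
# Minimal Metropolis-Hastings sketch for a 1D analytical test problem.
# The forward model, observed value, noise level, prior bounds, and
# proposal step are hypothetical stand-ins for illustration only.
import numpy as np

def forward(theta):                  # stand-in analytical forward model
    return np.sin(theta) + 0.5 * theta

d_obs, sigma = 1.2, 0.1              # hypothetical observation and noise

def log_post(theta):
    if not (-5.0 <= theta <= 5.0):   # uniform prior on [-5, 5]
        return -np.inf
    return -0.5 * ((forward(theta) - d_obs) / sigma) ** 2

rng = np.random.default_rng(2)
theta, lp, samples = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = theta + 0.3 * rng.normal()          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior = np.array(samples[5000:])           # discard burn-in
print(f"posterior mean {posterior.mean():.3f} +/- {posterior.std():.3f}")
```

Even this toy version makes the cost argument visible: accurate posteriors require tens of thousands of forward evaluations, which is affordable for an analytical function but, as the study concludes, prohibitive when each evaluation is a full reservoir simulation.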