Summary

When performing classic uncertainty reduction conditioned to dynamic data, a large number of reservoir simulations must be evaluated at high computational cost. As an alternative, we construct Bayesian emulators that mimic the dominant behavior of the reservoir simulator and are several orders of magnitude faster to evaluate. We combine these emulators within an iterative procedure that involves substantial but appropriate dimensional reduction of the output space (which represents the reservoir physical behavior, such as production data), enabling a more effective and efficient uncertainty reduction on the input space (representing uncertain reservoir parameters) than traditional methods, along with a more comprehensive understanding of the associated uncertainties. This study uses emulation-based Bayesian history-matching (BHM) uncertainty analysis for the uncertainty reduction of complex models, which is designed to address problems with a high number of both input and output parameters. We detail how to efficiently choose sets of outputs that are suitable for emulation and highly informative for reducing the input-parameter space, and we investigate different classes of outputs and objective functions. We use output emulators and implausibility analysis iteratively to perform uncertainty reduction in the input-parameter space, and we discuss the strengths and weaknesses of certain popular classes of objective functions in this context. We demonstrate our approach through an application to a benchmark synthetic model (built using public data from a Brazilian offshore field) in an early stage of development, using 4 years of historical data and four producers. This study investigates traditional simulation outputs (e.g., production data) as well as novel classes of outputs, such as misfit indices and summaries of outputs. We show that, despite there being a large number (2,136) of possible outputs, only very few (16) were sufficient to represent the available information; these informative outputs were represented by fast and efficient emulators at each iteration (or wave) of the history match to perform the uncertainty-reduction procedure successfully. Using this small set of outputs, we were able to substantially reduce the input space by removing 99.8% of the original volume. We found that a small set of physically meaningful individual production outputs was the most informative at early waves and, once emulated, resulted in the highest uncertainty reduction in the input-parameter space, while more complex but popular objective functions that combine several outputs were only modestly useful at later waves. This is because objective functions such as misfit indices have complex surfaces that can lead to low-quality emulators and hence result in noninformative outputs. We present an iterative emulator-based Bayesian uncertainty-reduction process in which all possible input-parameter configurations that lead to statistically acceptable matches between the simulated and observed data are identified. This methodology presents four central characteristics: incorporation of a powerful dimension reduction on the output space, resulting in significantly increased efficiency; effective reduction of the input space; computational efficiency; and provision of a better understanding of the complex geometry of the input and output spaces.
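For concreteness, the core of the implausibility analysis mentioned above can be sketched as follows. The standard univariate implausibility measure used in emulator-based BHM compares the emulator expectation of an output with the observed value, standardized by the observation-error, model-discrepancy, and emulator variances. This is a minimal sketch, not the authors' implementation; the variance inputs and the cutoff of 3 are illustrative assumptions.

```python
import numpy as np

def implausibility(z_obs, em_mean, var_obs, var_disc, var_em):
    """Univariate implausibility of an input configuration for one output.

    z_obs    : observed value of the output (e.g., a production measurement)
    em_mean  : emulator expectation of the simulator output at that input
    var_obs  : observation-error variance
    var_disc : model-discrepancy variance
    var_em   : emulator (code) uncertainty variance at that input
    """
    return np.abs(z_obs - em_mean) / np.sqrt(var_obs + var_disc + var_em)

# An input is typically ruled out when its maximum implausibility over the
# selected informative outputs exceeds a cutoff; 3 is a common choice,
# motivated by Pukelsheim's three-sigma rule.
def non_implausible(imp_values, cutoff=3.0):
    return np.max(imp_values) <= cutoff
```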
Reservoir simulation models incorporate physical laws and reservoir characteristics; they represent our understanding of subsurface structures based on the available information. Emulators are statistical representations of simulation models, offering fast evaluation of a sufficiently large number of reservoir scenarios to enable a full uncertainty analysis. Bayesian History Matching (BHM) aims to find the range of reservoir scenarios that are consistent with the historical data, in order to provide a comprehensive evaluation of reservoir performance and consistent, unbiased predictions incorporating realistic levels of uncertainty, as required for full asset management. We describe a systematic approach that combines reservoir simulation and emulation techniques within a coherent Bayesian framework for uncertainty quantification. Our systematic procedure is an alternative and more rigorous tool for reservoir studies dealing with probabilistic uncertainty reduction. It comprises the design of sets of simulation scenarios to facilitate the construction of emulators capable of accurately mimicking the simulator with known levels of uncertainty. Emulators accelerate the steps that require the large numbers of input-space evaluations needed for the analysis to be statistically valid. Via implausibility measures, we compare emulated outputs with historical data while incorporating the major process uncertainties. We then iteratively identify regions of the input-parameter space unlikely to provide acceptable matches, performing more runs and reconstructing more accurate emulators at each wave, an approach that benefits from several efficiency improvements. We provide a workflow covering each stage of this procedure. The procedure was applied to reduce uncertainty in a complex reservoir case study with 25 injection and production wells. The case study contains 26 uncertain attributes representing petrophysical, rock-fluid, and fluid properties. We selected phases of evaluation considering specific events during reservoir management, improving the efficient use of simulation resources. We identified and addressed data patterns not tracked in previous studies: simulator targets (e.g., liquid production) and water breakthrough lead to discontinuities in the relationships between outputs and inputs. With 15 waves and 115 valid emulators, we ruled out regions of the search space identified as implausible, leaving only a small proportion of the initial space judged as non-implausible (~10⁻¹¹%). The systematic procedure showed that uncertainty reduction using iterative Bayesian History Matching has the potential to be applied to a large class of reservoir studies with a high number of uncertain parameters. We advance the applicability of Bayesian History Matching for reservoir studies with four deliverables: (a) a general workflow for systematic BHM; (b) the use of phases to progressively evaluate the historical data; (c) the integration of two-class emulators in the BHM formulation; and (d) the demonstration of internal discrepancy as a source of error in the reservoir model.
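A minimal sketch of the wave-based refocusing loop described above is given below, reusing the `implausibility` function from the earlier sketch. Here `design_runs`, `run_simulator`, `select_outputs`, `build_emulator`, and the `space` object are hypothetical placeholders rather than the authors' code or any specific library; the emulator could be, for example, a Gaussian-process regression fitted to each wave's simulation runs.

```python
import numpy as np

def history_match_waves(space, obs, var_obs, var_disc, n_waves=15,
                        n_runs=100, n_candidates=100_000, cutoff=3.0):
    """Iterative refocusing sketch: at each wave, simulate a design of runs
    in the current non-implausible region, emulate a small set of
    informative outputs, and discard inputs whose maximum implausibility
    exceeds the cutoff. All helpers are hypothetical placeholders."""
    for wave in range(n_waves):
        X = design_runs(space, n_runs)            # space-filling design (e.g., Latin hypercube)
        Y = run_simulator(X)                      # expensive full-physics simulator runs
        outputs = select_outputs(Y, obs)          # indices of informative, emulatable outputs
        emulators = {j: build_emulator(X, Y[:, j]) for j in outputs}

        candidates = space.sample(n_candidates)   # cheap emulator-based screening
        keep = []
        for x in candidates:
            imp = 0.0
            for j, em in emulators.items():
                mu, var_em = em.predict(x)        # emulator mean and variance at x
                imp = max(imp, implausibility(obs[j], mu, var_obs[j], var_disc[j], var_em))
            if imp <= cutoff:
                keep.append(x)
        space = space.restrict_to(np.array(keep)) # refocus on the non-implausible region
    return space
```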
This paper proposes new objective functions to assimilate dynamic data for history matching and evaluates their influence on uncertainty conditioning. Representative events are observed and evaluated separately for the available dynamic data. The proposed objective functions evaluate two specific events: (1) the production transition behavior between the historical and forecasting periods, and (2) the water breakthrough time. To assess the production transition behavior, the deviation between the latest available historical data and the value forecast at a specific moment under forecasting conditions is computed. To assess water breakthrough, the breakthrough (irruption) time error is measured in addition to the water-rate objective function. The new objective functions are normalized using the Normalized Quadratic Deviation with Sign (NQDS) for comparison with conventional objective functions (e.g., NQDS of oil production rate). These additional objective functions are included in a probabilistic, multi-objective history-matching procedure and applied to the UNISIM-I-M benchmark for validation. Two history-matching procedures evaluate the impact of the additional objective functions, based on the same parameterization, boundary conditions, and number of iterations. The first procedure (Procedure A) includes objective functions traditionally used, such as fluid rates and bottom-hole pressure, computed using all the historical data points. The second procedure (Procedure B) considers the same objective functions as Procedure A plus the two additional ones. The advantage of including the additional objective functions was the supplementary data used to constrain the uncertainties, improving the attribute updates. Consequently, Procedure B generated better-matched models over the historical period and more consistent forecasts for both field and well behavior when compared with the available reference data. The addition of the breakthrough deviation improved the quality of the match for water rates because the breakthrough deviation is sensitive to reservoir attributes different from those captured by the objective functions related to water rate. The production transition error assisted the identification of scenarios that under- or overestimated well capacity. It also improved the transition of the models from the historical to the forecasting period, reducing fluctuations due to the changes in boundary conditions. Despite the increased number of objective functions to be matched, the improved reliability of the forecasts is an incentive for further study; other representative events, such as the oil rate before and after the start of water production, could also be separated and evaluated. The improved forecasting reliability supports the inclusion of the proposed objective functions in history-matching procedures.
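To make the normalization concrete, the following is a minimal sketch of one common formulation of the Normalized Quadratic Deviation with Sign used in UNISIM benchmark studies; it is not necessarily the exact expression used in this paper, and the tolerance `tol` and additive constant `ap` are illustrative, case-specific assumptions.

```python
import numpy as np

def nqds(sim, obs, tol=0.1, ap=0.0):
    """Normalized Quadratic Deviation with Sign (one common formulation).

    sim, obs : arrays of simulated and observed values over the history
    tol      : acceptable relative tolerance on the observed data
    ap       : additive constant to avoid division by zero when the observed
               data are near zero (e.g., water rate before breakthrough)

    Values near zero indicate a good match; the sign indicates whether the
    model over- or underestimates the observed data on aggregate."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    dev = sim - obs
    total = np.sum(dev)
    sign = np.sign(total) if total != 0 else 1.0
    return sign * np.sum(dev**2) / np.sum((tol * obs + ap)**2)

# Hypothetical usage: one NQDS value per well and per output series, e.g.
# nqds(simulated_water_rate, observed_water_rate, tol=0.1, ap=5.0)
```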