Summary: The optimization of field development plans (FDPs), which includes optimizing well counts, well locations, and the drilling sequence, is crucial in reservoir management because it has a strong impact on the economics of the project. Traditional optimization studies are scenario specific, and their solutions do not generalize to new scenarios (e.g., a new earth model or new price assumption) that were not seen before. In this paper, we develop an artificial intelligence (AI) using deep reinforcement learning (DRL) to address the generalizable field development optimization problem, in which the AI can provide optimized FDPs in seconds for new scenarios within its range of applicability. In the proposed approach, the field development optimization problem is formulated as a Markov decision process (MDP) in terms of states, actions, environment, and rewards. The policy function, which maps the current reservoir state to the optimal action at the next step, is represented by a deep convolutional neural network (CNN). This policy network is trained using DRL on simulation runs of a large number of different scenarios generated to cover a “range of applicability.” Once trained, the DRL AI can be applied to obtain optimized FDPs for new scenarios at minimal computational cost. Although the proposed methodology is general, in this paper we apply it to develop a DRL AI that provides optimized FDPs for greenfield primary depletion problems with vertical wells. This AI is trained on more than 3×10⁶ scenarios with different geological structures, rock and fluid properties, operational constraints, and economic conditions, and thus has a wide range of applicability. After training, the DRL AI yields optimized FDPs for new scenarios within seconds. The solutions from the DRL AI suggest that, starting with no reservoir engineering knowledge, it has developed the intelligence to place wells at “sweet spots,” maintain proper well spacing and well count, and drill early. In a blind test, the solution from the DRL AI is shown to outperform that from the reference agent, an optimized pattern drilling strategy, almost 100% of the time. The DRL AI is being applied to a real field, and preliminary results are promising. Because the DRL AI optimizes a policy rather than a plan for one particular scenario, it can be applied to obtain optimized development plans for different scenarios at very low computational cost. This is fundamentally different from traditional optimization methods, which not only require thousands of runs for one scenario but also lack the ability to generalize to new scenarios.
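To make the MDP formulation concrete, the following is a minimal, hypothetical sketch (in PyTorch, not the authors' implementation) of a CNN policy network that maps a multichannel grid of reservoir-state maps to a probability distribution over actions: drill at a particular grid cell, or do not drill at the current stage. The architecture, channel contents, and grid size are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a CNN policy network for the MDP described
# above: input is a multichannel grid of reservoir-state maps, output is a
# probability over actions (drill at cell (i, j), or do not drill at the
# current stage). Architecture and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    def __init__(self, n_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.drill_logits = nn.Conv2d(64, 1, kernel_size=1)  # one logit per grid cell
        self.no_drill_logit = nn.Linear(64, 1)               # one global "do not drill" logit

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        x = self.features(state)                              # (B, 64, nx, ny)
        per_cell = self.drill_logits(x).flatten(start_dim=1)  # (B, nx*ny)
        no_drill = self.no_drill_logit(x.mean(dim=(2, 3)))    # (B, 1)
        return torch.softmax(torch.cat([per_cell, no_drill], dim=1), dim=1)

# Example: 4 assumed state channels (e.g., permeability, pressure, saturation,
# existing wells) on a 50x50 grid.
policy = PolicyNetwork(n_channels=4)
action_probs = policy(torch.randn(1, 4, 50, 50))  # shape (1, 2501)
```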
Oil and gas field development optimization, which involves determining the optimal number of wells, their locations, and their drilling sequence while satisfying operational and economic constraints, represents a challenging computational problem. In this work, we present a deep-reinforcement-learning-based artificial intelligence agent that can provide optimized development plans at minimal computational cost, given a basic description of the reservoir and its rock/fluid properties. This artificial intelligence agent, comprising a convolutional neural network, provides a mapping from a given state of the reservoir model, constraints, and economic conditions to the optimal decision (drill/do not drill and well location) to be taken in the next stage of the defined sequential field development planning process. The state of the reservoir model is defined using parameters that appear in the governing equations of two-phase flow (such as well index, transmissibility, fluid mobility, and accumulation). A feedback-loop training process, referred to as deep reinforcement learning, is used to train an artificial intelligence agent with this capability. The training entails millions of flow simulations with varying reservoir model descriptions (structural, rock, and fluid properties), operational constraints (maximum liquid production, drilling duration, and water-cut limit), and economic conditions. The parameters that define the reservoir model, operational constraints, and economic conditions are randomly sampled from a defined range of applicability. Several algorithmic treatments are introduced to enhance the training of the artificial intelligence agent. After appropriate training, the artificial intelligence agent provides an optimized field development plan instantly for new scenarios within the defined range of applicability. This approach has advantages over traditional optimization algorithms (e.g., particle swarm optimization, genetic algorithms), which are generally used to find a solution for a specific field development scenario and are typically not generalizable to different scenarios. The performance of the artificial intelligence agents for two- and three-dimensional subsurface flow is compared to that of well-pattern agents. Optimization results using the new procedure are shown to significantly outperform those from the well-pattern agents.
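As an illustration of the state definition described above, the sketch below (assumed variable names, not the paper's implementation) stacks per-cell maps of the governing-equation parameters into an image-like array that a convolutional policy network could consume; the min-max normalization and the inclusion of an existing-well map are assumptions.

```python
# Illustrative sketch (assumed names) of assembling the reservoir "state" from
# parameters of the two-phase flow equations: each quantity becomes one channel
# of an image-like array consumed by the convolutional policy network.
import numpy as np

def build_state(well_index, transmissibility, mobility, accumulation, existing_wells):
    """Stack per-cell maps (each of shape (nx, ny)) into a (n_channels, nx, ny)
    state array, min-max normalizing each channel."""
    state = []
    for c in (well_index, transmissibility, mobility, accumulation, existing_wells):
        c = np.asarray(c, dtype=float)
        rng = c.max() - c.min()
        state.append((c - c.min()) / rng if rng > 0 else np.zeros_like(c))
    return np.stack(state, axis=0)

# Example on a hypothetical 50x50 grid with no wells drilled yet.
nx, ny = 50, 50
state = build_state(np.random.rand(nx, ny), np.random.rand(nx, ny),
                    np.random.rand(nx, ny), np.random.rand(nx, ny),
                    np.zeros((nx, ny)))
print(state.shape)  # (5, 50, 50)
```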
Production forecasting for oil and gas reservoirs is highly uncertain because of various subsurface uncertainties. Rapid interpretation of measurement data and updating of the probabilistic forecast are crucial for reducing uncertainty and devising high-side capture or low-side mitigation plans. The traditional history-matching and forecasting workflow requires a long cycle time and a large number of simulations. In this work, we propose a novel method to rapidly update the prediction S-curves given early production data, without performing additional simulations or model updates after the data come in. The proposed method consists of several steps. Before the data come in, we perform an ensemble of simulations to calculate the correlation between the measurement data (e.g., bottomhole pressure [BHP]) and the business objective (e.g., estimated ultimate recovery [EUR]). After the data come in, we first perform a model-validation step, based on the Mahalanobis distance and statistical testing, to determine the validity of the model given the observation data. If the model passes the validation test, principal component analysis (PCA) is applied to precondition the observation data, detecting and correcting components of the response that cannot be explained by the simulated responses. Finally, an analytical formula based on a multivariate-Gaussian assumption is used to estimate the posterior S-curve of the business objective. The approach has been successfully applied in a Brugge waterflood benchmark study, in which the first 2 years of production data (rate and BHP) were used to update the S-curve of the estimated ultimate recovery. Through this study, we observed several key advantages of the proposed method. Compared with traditional history matching methods, our method focuses on the data/objective-function relationship and thus circumvents the need to update the model parameters and states. The ensemble of simulations can be precomputed, and no additional simulation is needed after the data arrive. In addition, our method is insensitive to the number of parameters and does not require a numerical proxy for the simulations, which is normally needed with traditional sampling-based methods. To our knowledge, the proposed workflow, including the model-validation and denoising techniques, is novel. The workflow is also general enough to be used in other model-based data interpretation applications.
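The analytical multivariate-Gaussian update can be sketched as follows (function and variable names are assumptions, and the PCA preconditioning and validation steps are omitted): given an ensemble of simulated data responses and corresponding objective values, the posterior mean and variance of the objective after observing the data follow from ensemble-estimated covariances, with no additional simulations.

```python
# Minimal sketch (assumed names, not the authors' implementation) of the
# analytical multivariate-Gaussian update described above: given an ensemble
# of simulated data responses D (n_ens x n_data) and objective values j
# (n_ens,), estimate the posterior mean and variance of the objective after
# observing d_obs, without running any additional simulations.
import numpy as np

def gaussian_posterior(D, j, d_obs, obs_error_var=0.0):
    d_mean, j_mean = D.mean(axis=0), j.mean()
    Dc, jc = D - d_mean, j - j_mean
    n = len(j) - 1
    # Ensemble-estimated covariances; in practice the data are first reduced
    # with PCA (as in the proposed workflow) to keep C_dd well conditioned.
    C_dd = Dc.T @ Dc / n + obs_error_var * np.eye(D.shape[1])
    C_jd = jc @ Dc / n
    K = np.linalg.solve(C_dd, C_jd)          # C_dd^{-1} C_dj
    post_mean = j_mean + K @ (d_obs - d_mean)
    post_var = j.var(ddof=1) - C_jd @ K
    return post_mean, post_var
```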
Summary: Data-acquisition programs, such as surveillance and pilots, play an important role in minimizing subsurface risks and improving decision quality for reservoir management. For design optimization and investment justification of these programs, it is crucial to be able to quantify the expected uncertainty reduction and the value of information (VOI) attainable from a given design. This problem is challenging because the data from the acquisition program are uncertain at the time of the analysis. In this paper, a method called ensemble-variance analysis (EVA) is proposed. Derived from a multivariate Gaussian assumption between the observation data and the objective function, the EVA method quantifies the expected uncertainty reduction from covariance information that is estimated from an ensemble of simulations. The result of EVA can then be used with a decision tree to quantify the VOI of a given data-acquisition program. The proposed method has several novel features compared with existing methods. First, the EVA method directly considers the data/objective-function relationship. Therefore, it can handle nonlinear forward models and an arbitrary number of parameters. Second, for cases when the multivariate Gaussian assumption between the data and objective function does not hold, the EVA method still provides a lower bound on expected uncertainty reduction, which can be useful in providing a conservative estimate of the surveillance/pilot performance. Finally, EVA also provides an estimate of the shift in the mean of the objective-function distribution, which is crucial for VOI calculation. In this paper, the EVA work flow for expected-uncertainty-reduction quantification is described. The result from EVA is benchmarked with recently proposed rigorous sampling methods, and the capacity of the method for VOI quantification is demonstrated for a pilot-analysis problem using a field-scale reservoir model.
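Under the same multivariate-Gaussian assumption used in the previous sketch, the expected posterior variance of the objective does not depend on the yet-unobserved data values, so the expected uncertainty reduction can be evaluated from ensemble covariances before the data are acquired. The sketch below (illustrative only, with assumed names) computes this quantity; in EVA the result would then feed a decision tree for the VOI calculation.

```python
# Illustrative sketch of the ensemble-variance analysis (EVA) idea (assumed
# names, not the paper's code): estimate the expected fractional reduction in
# objective-function variance from a data-acquisition program, using only an
# ensemble of simulated data D and objective values j computed before the
# program is executed.
import numpy as np

def expected_uncertainty_reduction(D, j, obs_error_var=0.0):
    """D: (n_ens, n_data) simulated data; j: (n_ens,) objective values."""
    Dc = D - D.mean(axis=0)
    jc = j - j.mean()
    n = len(j) - 1
    C_dd = Dc.T @ Dc / n + obs_error_var * np.eye(D.shape[1])  # data covariance
    C_jd = jc @ Dc / n                                          # objective/data covariance
    prior_var = j.var(ddof=1)
    expected_post_var = prior_var - C_jd @ np.linalg.solve(C_dd, C_jd)
    return 1.0 - expected_post_var / prior_var  # expected fractional variance reduction
```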