Decision making in the oilfield is a crucial process in petroleum field activities, where numerous attributes and uncertainties exist throughout the process. In the development of new fields, a well-organized historical database is important because it minimizes uncertainty, decreases risk, and provides better insight and robustness in decision making. Statistics is a powerful tool for turning information or data into knowledge when used with care and with a physical understanding of the cause-effect relation between the attributes and the outcome. Unfortunately, historical data and past learnings cannot be used efficiently in oilfield decisions due to the lack of systematically organized historical data, even though there is huge potential for turning terabytes of data into knowledge and understanding for more successful decisions and results. Moreover, using historical data in reservoir engineering studies is generally very complex, since it requires integrating vast amounts of data from several disciplines, with different data sources and at different scales, which sometimes leads modelers to prefer new data collection from the field over the complex historical data on hand. In this sense, a multi-attribute statistical model for each reservoir will greatly enhance the outcomes of future actions by bringing a statistical understanding to the physically complex relation between causes and effects, or in other words, attributes and results. Data-driven models are established for the phenomena of interest by collecting relevant historical data, after which a multivariate regression is carried out to show the significance of each attribute in the model. These statistical models are then used to make future decisions in the same reservoir, such that attributes can be selected within the optimum ranges that yield the best results and outcomes.
Attributes can consist of events and key parameters that influence the outcome. The model is illustrated by investigating the factors affecting the performance of vertical and horizontal wells in tight reservoirs. The data-driven model is validated against a numerical reservoir-simulation model and used to determine the significance of each parameter and the optimum operating intervals.
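As an illustration of the multivariate-regression step described above, the sketch below fits a linear model relating two hypothetical well attributes (lateral length and fracture-stage count) to a synthetic outcome using only the standard library; the data and attribute names are invented for illustration, not taken from the study.

```python
# Illustrative sketch of the multivariate-regression step, using ordinary
# least squares via the normal equations. All data below is synthetic and
# the attribute names (lateral length, frac stages) are hypothetical.

def solve(A, rhs):
    """Solve A x = rhs by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Fit beta in y ~ X beta via the normal equations (X'X) beta = X'y."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# Rows: [intercept, lateral length (kft), frac stages]; outcome: cum. oil (Mbbl)
X = [[1, 4.0, 18], [1, 5.5, 30], [1, 6.0, 25],
     [1, 7.5, 40], [1, 8.0, 33], [1, 9.0, 50]]
y = [126, 180, 180, 240, 236, 290]   # constructed as 10 + 20*length + 2*stages
beta = ols(X, y)
print([round(v, 4) for v in beta])   # coefficients indicate each attribute's impact
```

The fitted coefficients (here recovering the constructed values 10, 20, and 2) give the marginal effect of each attribute; in a real study, standardized coefficients or significance tests from the regression would be used to rank attributes and select optimum ranges.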
Optimization has become a practical component of decision-making processes in field development and reservoir management. Although optimization simplifies decision making, it harnesses complex equations and formulations that may be computationally expensive to solve. Numerical reservoir simulation adds another dimension to this problem when combined with optimization software to find the optimum defined by an objective function. Current reservoir simulation models are constructed from vast amounts of data, and when such a model is coupled with optimization software, the computational limits of regular computers can prevent reaching the desired result, even though recent technological developments allow huge reservoir models to be run on parallel computing systems. Consequently, achieving robust and faster results in optimization problems is essential. Predefined objective functions in optimization software, when combined with numerical reservoir simulators, attempt to maximize the net present value or cumulative oil recovery defined by an objective function; the objective function can also be multi-objective, leading to Pareto sets consisting of trade-offs between objectives. Using an optimization algorithm with predefined objective functions does not give the physical reservoir fluid-flow phenomenon the flexibility to "maneuver" throughout the iterations of an optimization process. It is therefore necessary to use a more flexible objective function by introducing conditional statements through procedures. In this study, an optimization software package is combined with a commercial reservoir simulator.
Conditional statements implemented in the simulator as procedures help the software/simulator combination operate under pseudo-dynamic objective functions, which provide speed and robustness by trying sets of parameter combinations and thus achieving the conditions that lead to the highest recovery within the time frame defined by each conditional statement. The procedures feature enables implementation of code using conditional statements that act as piecewise objective functions, maximizing recovery while taking into account the time frame or condition to which they belong. A commercial reservoir simulator is used in this study, with conditional statements enhancing production in a given time frame under certain conditions. The recoveries optimized with pseudo-dynamic objective functions are higher than those of an optimization case with a predefined objective function held constant throughout the iterations of the optimization and simulation process.
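A minimal sketch of how conditional statements can act as a piecewise ("pseudo-dynamic") objective function is shown below. The toy decline model `toy_simulator`, the rate threshold, and the weighting are all invented stand-ins for the commercial simulator and its procedures; only the structure, conditions switching the objective term by time frame, reflects the approach described.

```python
import random

# Hypothetical sketch: a "pseudo-dynamic" objective built from conditional
# statements (piecewise by time frame), wrapped around a toy stand-in for a
# reservoir simulator. A real workflow would call a commercial simulator here.

def toy_simulator(rate, years):
    """Stand-in for a simulation run: crude exponential-decline recovery."""
    decline = 0.05 + 0.002 * rate            # higher rate -> faster decline
    return sum(rate * (1 - decline) ** t for t in range(years))

def piecewise_objective(rate):
    """Conditional statements act as piecewise objective terms:
    early years reward recovery, later years penalize aggressive rates."""
    early = toy_simulator(rate, 5)
    late = toy_simulator(rate, 20) - early
    if rate > 100:                           # condition: facility constraint
        late *= 0.8                          # penalize aggressive drawdown
    return early + 0.5 * late

random.seed(0)
best_rate, best_val = None, float("-inf")
for _ in range(500):                         # simple random-search optimizer
    rate = random.uniform(10, 200)
    val = piecewise_objective(rate)
    if val > best_val:
        best_rate, best_val = rate, val
print(round(best_rate, 1), round(best_val, 1))
```

In the study's setting, each branch of the conditional would be a procedure inside the simulator, so the objective effectively changes as the run progresses through the defined time frames rather than staying fixed across all optimizer iterations.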
Regional aquifers are critical for the development of oil/gas fields that need water injection for pressure maintenance. However, modeling a regional aquifer is a difficult task, as data acquisition and analyses are primarily focused on hydrocarbon-bearing intervals. Aquifers are also generally regional, whereas the available data are clustered around oil/gas fields, with negligible data in the vast areas lying outside the fields of interest. The present study aims at building a regional aquifer model for the Abu Dhabi onshore area, integrating all the available data at different scales, that may be used in planning the future availability of water resources. Most of the oil/gas fields located onshore Abu Dhabi have so far been developed with a primary pressure-maintenance strategy injecting water obtained from the shallower Late Cretaceous, Paleocene, and Eocene aquifers. The main water-source reservoirs are the Dammam, Umm Er Radhuma, and Simsima formations. Despite limited data availability, the study delivered a fit-for-purpose 3D reservoir model integrating seismic data, regional outcrop analogs, and the available log and core data. An enormous database containing 4,768 wells from the 10 most important fields of onshore Abu Dhabi was used to build the structural framework with the help of regionally interpreted seismic surfaces. Sequence-stratigraphic cycles were identified in all the formations, considering eustatic curves and correlation using GR-NPHI-RHOB logs. The average layer thickness was designed to be around 20 ft in all the reservoir intervals. Input for the property model was very limited, as logs and cores are generally not acquired in aquifer intervals. Wells with NPHI-RHOB logs were used for estimating porosity, which was calibrated with the limited core-porosity data available. Permeability was computed by fitting functions to the poro-perm cross-plot and then adding statistical dispersion based on the observed core-permeability data.
Different permeability scenarios were defined due to the high dispersion in the core data. For dynamic modeling, three equilibration regions were set up for the three main aquifers, and a common PVT table was used. All the water supply and disposal wells were used in history matching. The high permeability case was found to honor the production and injection data best. The static model, built by integrating all available data at different scales, is robust. Although the limited data available for property modeling was a major concern, the model showed a reasonable history match and is fit for purpose to predict future water supply. A systematic data-acquisition plan for these aquifers may be implemented in the future to make the current model more reliable. The regional aquifer model of Abu Dhabi Onshore is the first of its kind in the country to incorporate all the available information, from regional-scale seismic interpretation to small-scale core data.
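The permeability-modeling step described above (fitting a function to the poro-perm cross-plot, then adding statistical dispersion) can be sketched as follows; the core points, the log-linear transform, and the dispersion magnitude are synthetic illustrations, not the study's actual data.

```python
import math
import random

# Illustrative sketch (synthetic data): fit a log-linear poro-perm transform
# log10(k) = a + b*phi to core points, then add log-normal dispersion so the
# modelled permeability reproduces the scatter observed in core data.

core = [(0.12, 1.5), (0.18, 12.0), (0.22, 60.0),
        (0.26, 250.0), (0.30, 900.0)]          # (porosity, core perm in mD)
phi = [p for p, _ in core]
logk = [math.log10(k) for _, k in core]

# Least-squares fit of log10(k) = a + b * phi
n = len(core)
mx, my = sum(phi) / n, sum(logk) / n
b = (sum((x - mx) * (y - my) for x, y in zip(phi, logk))
     / sum((x - mx) ** 2 for x in phi))
a = my - b * mx

def perm(porosity, sigma=0.0, rng=None):
    """Permeability (mD) from porosity; sigma adds log-normal dispersion."""
    noise = rng.gauss(0.0, sigma) if rng else 0.0
    return 10 ** (a + b * porosity + noise)

rng = random.Random(42)
print(round(perm(0.20), 1))                    # deterministic trend value
print(round(perm(0.20, sigma=0.3, rng=rng), 1))  # one dispersed realization
```

Drawing several realizations with different `sigma` values is one simple way to produce the low/mid/high permeability scenarios mentioned above for history matching.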
Maturing giant and supergiant fields typically have an extensive data set ranging from seismic data to time-lapse surveillance data. The data set, the associated studies and models, and the driving values defined by ADNOC form the foundations of long-term plans, field development plans, business plans, reservoir management plans, and production optimization plans. Ensuring the adequacy and optimality of these plans and their capability to meet their prescribed objectives is a very challenging task that requires unique assessment workflows. ADNOC is undertaking multiple fast-tracked Integrated Reservoir Performance and Production Sustainability Assurance (IPR) projects with the above objectives in mind. In this paper we share the experience gained through the execution of such projects and the way this experience helped refine the workflows and the associated value. We designed and applied unique workflows that combine "bottom-up" approaches by technical discipline with a "top-down" focus on the factors that have the largest impact on the scope of the plans and the ability to deliver the expected outcomes. The identified issues and opportunities are presented in terms of their impact on volumes in place, reserves, facilities, drilling plans, surveillance plans, modeling, etc., and are associated with urgency indicators to help prioritize actions. The adopted integrated methodology and workflows helped identify and rank various issues related to the reservoir models (static and dynamic), and many recommendations were provided on how to tackle these issues in the new generation of models. Advanced reservoir management workflows were generated for optimal production and injection balancing, as well as to better manage the water flood and identify the most offending injectors. Many scenarios were explored to check the different elements of the full-field development plan and ongoing projects, considering all the identified uncertainties.
Many recommendations were accordingly provided concerning infill drilling and the future gas-lift program. Specific workflows were generated to optimize the performance of the existing gas-lift wells and to identify and rank the future wells that will need gas lift according to their urgency, hence confirming the gas-lift compression capacity that was the subject of an ongoing project. A key enabler for completing the project in the planned time frame was the use of cutting-edge modeling technology, which had a drastic impact on the project and on the team's capability to explore a comprehensive set of scenarios with associated sensitivities and uncertainty analysis, providing unique insights toward more optimal decisions and a clearer way forward.
Achieving a high-quality history match is critical to understanding reservoir uncertainties and performing reliable field-development planning. Classical approaches require large uncertainty studies to be conducted with reservoir-simulation models, and optimization techniques are then applied to reach a configuration that minimizes the history-match error. Such techniques are computationally heavy, because full reservoir simulations are run in both the uncertainty studies and the optimization processes. To reduce the computing requirements during the optimization process, we propose to create a robust deep-learning model, based on the hidden relationships between the uncertainty parameters and the reservoir-simulation results, that can operate as a surrogate for computationally intensive reservoir-simulation models. In this paper, we present a workflow that combines a deep-learning machine-learning (ML) model with an optimizer to automate the history-matching process. Initially, the reservoir simulator is run to generate an ensemble of realizations, providing a comprehensive data set relating the history-matching uncertainty parameters to the associated reservoir-simulation results. These data are used to train a deep-learning model to predict, from a set of the selected history-matching uncertainty parameters, the reservoir-simulation results for all wells and all properties relevant to history matching. This deep-learning model is used as a proxy to replace the reservoir-simulation model and to reduce the computational overhead of running the reservoir simulator. The optimization solution embeds the trained ML model and aims to deliver a set of uncertainty parameters that minimizes the mismatch between simulation results and historical data. At each optimization iteration, the ML model is used to predict the well-level reservoir-simulation results.
At the end of the optimization process, the optimal parameters suggested by the optimizer are validated by running the reservoir simulator. The proposed work achieves high-quality results by leveraging advanced artificial-intelligence techniques, thus automating and significantly accelerating the history-matching process. The use of uncertainty parameters as input to the deep-learning model, together with the model's ability to predict production, injection, and pressure profiles for all wells, is a unique methodology. Furthermore, the combination of the deep-learning surrogate reservoir model with optimization methods to resolve history-matching problems advances the industry's practices on the subject.
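A schematic end-to-end sketch of this workflow is given below. A two-parameter linear toy function stands in for the reservoir simulator, and a small model trained by gradient descent stands in for the deep-learning proxy; all parameter names and values are invented. The four steps mirror the workflow described: ensemble generation, surrogate training, surrogate-based optimization, and final validation with the simulator.

```python
import random

# Toy end-to-end sketch (all values synthetic): ensemble -> surrogate ->
# optimization -> validation, mirroring the workflow described above.

def simulator(kv, aq):
    """Stand-in for an expensive reservoir-simulation run
    (kv = kv/kh multiplier, aq = aquifer-strength multiplier)."""
    return 100.0 * kv + 20.0 * aq      # e.g. a well-level response

observed = simulator(0.35, 1.8)        # "historical data"; truth is hidden

# 1) Ensemble of realizations -> training data for the surrogate
rng = random.Random(1)
train = [((kv, aq), simulator(kv, aq))
         for kv, aq in ((rng.random(), 3 * rng.random()) for _ in range(50))]

# 2) Train surrogate y = w1*kv + w2*aq + b by gradient descent on MSE
#    (a stand-in for training the deep-learning model)
w1 = w2 = b = 0.0
for _ in range(20000):
    g1 = g2 = gb = 0.0
    for (kv, aq), y in train:
        err = (w1 * kv + w2 * aq + b) - y
        g1 += err * kv; g2 += err * aq; gb += err
    n = len(train)
    w1 -= 0.05 * g1 / n; w2 -= 0.05 * g2 / n; b -= 0.05 * gb / n

def surrogate(kv, aq):
    return w1 * kv + w2 * aq + b

# 3) Optimizer iterates on the cheap surrogate only (random search here)
best = min(((rng.random(), 3 * rng.random()) for _ in range(5000)),
           key=lambda p: abs(surrogate(*p) - observed))

# 4) Validate the suggested parameters with one real "simulator" run
mismatch = abs(simulator(*best) - observed)
print(best, mismatch)
```

In the actual workflow, step 2 would train a deep neural network on many wells and properties, and step 3 would use a proper optimizer, but the division of labor is the same: the expensive simulator is called only to build the ensemble and to validate the final suggestion.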