Automatically updating simulation models with historical field performance and events is a challenging, time-consuming task that reservoir engineers must tackle, whether to maintain history-matched reservoir models (evergreen assets), undertake a new calibration exercise, or update forecasting studies. The challenge takes on another dimension with the increasing complexity of field operations (production/injection/drilling/workover), well designs, and downhole equipment configurations. This paper presents an efficient workflow that capitalizes on IR4.0 Digital Twin principles to automate the seamless integration and updating of historical well information in reservoir simulation models. The objective of this workflow is to drive reservoir simulation toward capitalizing on digital transformation and the Live Earth models concept to revolutionize model calibration and history matching, delivering superior prediction quality with greater confidence. Well data digitization in this workflow was achieved by automating well data acquisition, well data quality-check enforcement, and well modeling in interconnected simulation applications. The workflow minimizes manual human interaction with data, giving engineers the chance to focus on the reservoir engineering aspects of their tasks. The workflow consists of four steps. The first step is data acquisition, in which various types of well data are fetched. The second step is a data quality check, in which data from different sources is subjected to engineering and scientific measures (i.e., Quality Indices) that translate engineering knowledge and experience into the detection of possible data inconsistencies. The third and fourth steps cover exporting and importing the relevant data within the reservoir simulation applications' portfolio, where various data types are handled and managed seamlessly.
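The four-step workflow described in the abstract can be sketched as a simple pipeline. This is a minimal illustration only; all function names, the toy quality index (fraction of required data types present), and the threshold value are assumptions for the sketch, not the actual application's API.

```python
# Hypothetical sketch of the four-step workflow: (1) acquisition,
# (2) quality check, (3) export, (4) import into the simulation model.

def fetch_well_data(well_id, source):
    # Step 1: data acquisition from a repository (here: an in-memory dict).
    return source[well_id]

def quality_check(record):
    # Step 2: a toy Quality Index - the fraction of required fields present.
    required = ("deviation_survey", "perforations", "formation_tops")
    present = sum(1 for f in required if record.get(f) is not None)
    return present / len(required)

def export_well_events(record):
    # Step 3: recast the record into a neutral exchange format.
    return {k: v for k, v in record.items() if v is not None}

def import_into_model(model, well_id, events):
    # Step 4: merge the exported events into the simulation model.
    model.setdefault(well_id, {}).update(events)

def run_workflow(well_ids, source, model, qi_threshold=0.66):
    flagged = []
    for well_id in well_ids:
        record = fetch_well_data(well_id, source)
        if quality_check(record) >= qi_threshold:
            import_into_model(model, well_id, export_well_events(record))
        else:
            flagged.append(well_id)  # held for manual review, not silently imported
    return flagged
```

Wells whose quality index falls below the threshold are flagged rather than imported, which mirrors the paper's goal of letting engineers intervene only where the data warrants it.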
Data and event acquisition workflows were automated to provide seamless well data transfer between different data sources and reservoir simulation pre- and post-processing applications. The different types of well data were obtained through automatic fetching from data repositories (databases, petrophysical models, etc.). Quality Check (QC) procedures were automatically performed against deviation surveys, perforations, casing/tubing, flowmeter data, cores, formation tops, and productivity/injectivity indices. This helped identify data discrepancies, if any, including missing data entries and contradicting well events. The automation of these workflows significantly reduced the time needed to transmit well data to the reservoir models or update it there, eliminated human errors associated with data entry or corrections, and helped keep the models up to date (evergreen). Incorporating digital twin concepts enabled advanced automatic digitization of well information. It provided a data exchange solution that meets E&P requirements, along with more effective and efficient methods of connecting diverse applications and data repositories.
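The kind of consistency checks the automated QC performs on perforation data can be illustrated as follows. The specific rules (inverted intervals, perforations below surveyed total depth, perforations above the casing shoe) and all field names are assumptions for the sketch, not the actual Quality Indices used in the workflow.

```python
# Illustrative QC rules for perforation intervals, cross-checked against
# other well data (deviation survey depth, casing shoe). Hypothetical sketch.

def check_perforations(perfs, survey_td, casing_shoe_md):
    """Return a list of human-readable issues for each contradicting interval.

    perfs          -- list of (top_md, bottom_md) measured-depth pairs
    survey_td      -- deepest measured depth covered by the deviation survey
    casing_shoe_md -- measured depth of the casing shoe
    """
    issues = []
    for top, bottom in perfs:
        if bottom <= top:
            issues.append(f"inverted interval {top}-{bottom}")
        if bottom > survey_td:
            issues.append(f"interval {top}-{bottom} extends below surveyed TD {survey_td}")
        if top < casing_shoe_md:
            issues.append(f"interval {top}-{bottom} starts above casing shoe {casing_shoe_md}")
    return issues
```

An empty return value means the perforation records are mutually consistent with the other well data; any entries are the "contradicting well events" the abstract refers to, surfaced for engineering review.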
The volumes of data generated by modern reservoir simulators can be huge. This leads to problems with CPU time and memory limitations when presenting the results for analysis. The issue is not just with handling the large arrays for massive 3D grids, but also with the well and completion vectors, which can consume many gigabytes of disk space. The problem is compounded by the trend toward running many simulations of the same reservoir, whether for history matching or to explore multiple development scenarios. A further issue is data display. When the number of runs, wells, completions, and timesteps gets large, it can be difficult for an engineer to assimilate the information — especially when comparing bulk simulation results with sporadic historical measurements. This paper describes techniques to overcome these problems. We concentrate on well-related results such as rates and totals, and on grid values in completed cells. The techniques include recasting the results into a form suitable for direct access on disk, a load-on-demand architecture, and lightweight in-memory representations. In addition, user interface ideas that concentrate information are presented.
This paper provides an overview of an advanced visualization system for giga-cell simulation models, designed to assist reservoir engineers in managing and developing Saudi Arabia’s giant fields. The challenges in the hydrocarbon industry require the use of the latest technology to maximize recovery in a cost-effective manner. Over the last few years, reservoir simulation activities have undergone a major transformation toward the use of giga-cell simulation models [1]. This transformation stems from the use of advanced data acquisition solutions and advanced modeling packages to capture and model physically large fields. Fine-scale grid blocks capture the detailed description of geologic heterogeneity necessary to model the complexity of fluid flow in the subsurface. Giga-cell simulation models are currently generated using Saudi Aramco’s GigaPOWERS parallel reservoir simulator [2]. The size of reservoir simulation models continues to grow, exceeding the billion-cell barrier. The design of a high-performance computational platform for simulating giant reservoir models has been discussed in other literature [3]. In this paper, we focus on the simulation post-processing visualization system developed, which consists of advanced technologies for handling and visualizing massive amounts of data. Implemented techniques include remote visualization utilizing graphics processing clusters, parallel data loading, level-of-detail graphics rendering, and a hierarchical multi-resolution data structure. In addition, this paper highlights a number of advanced visualization features that have been implemented, such as the display of fluid patterns in the field using streamlines and vector fields, advanced volume visualization techniques based on streamlines, and well filtering to help engineers better understand fluid flow in subsurface reservoirs.
The performance considerations, challenges, and impact on reservoir simulation studies are also discussed.
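The hierarchical multi-resolution structure paired with level-of-detail rendering can be sketched as a coarsening pyramid: each level averages blocks of the finer one, and the renderer selects a level from the viewing distance. The 2×2×2 coarsening scheme and the distance-doubling selection rule are assumptions for this illustration, not the system's actual implementation.

```python
# Toy multi-resolution pyramid over a nested-list 3D grid of cell values,
# with a simple distance-based level-of-detail selector.

def coarsen(grid):
    """Average non-overlapping 2x2x2 blocks; dimensions assumed even."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    return [[[sum(grid[2 * i + a][2 * j + b][2 * k + c]
                  for a in range(2) for b in range(2) for c in range(2)) / 8.0
              for k in range(nz // 2)]
             for j in range(ny // 2)]
            for i in range(nx // 2)]

def build_pyramid(grid, levels):
    """pyramid[0] is full resolution; each later entry is 8x smaller."""
    pyramid = [grid]
    for _ in range(levels - 1):
        pyramid.append(coarsen(pyramid[-1]))
    return pyramid

def pick_level(pyramid, view_distance, base_distance=100.0):
    """Farther views use coarser levels (distance-doubling rule)."""
    level = 0
    while level + 1 < len(pyramid) and view_distance > base_distance * 2 ** level:
        level += 1
    return level
```

A production renderer would stream the chosen level's bricks to the GPU; the sketch only shows why the hierarchy keeps working-set size roughly constant as the camera pulls back from a billion-cell model.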