Probabilistic modelling is one of the most frequently used methods in reservoir simulation to manage uncertainties and assess their impact on reservoir behavior and cumulative production. However, depending on the extent of the uncertainty, hundreds of scenarios can be generated, leaving engineers unable to analyze the data meaningfully. To remedy this, an unsupervised machine learning workflow was developed to identify unique scenarios, paired with an integrated dashboard to enable rapid and deep analysis. A case study was carried out using data from a Shell-operated gas field in the North Sea. Data was first mined from 480 history-matched scenarios using Python, and 20 unique clusters were identified through K-Means clustering of pressure and saturation changes with time in each gridblock. This meant the team had to examine only 20 scenarios instead of 480 to understand the effect of different inputs on pressure and saturation response. For enhanced analysis, an integrated visualization dashboard was created to display pressure and saturation changes and production profiles and to connect them back to input parameters. The new methodology enabled the team to integrate different aspects of reservoir modelling, from static to dynamic to surface constraints, on a single dashboard, making it possible to find patterns in large volumes of data that could not be seen before. For example, a cluster with high water movement was identified; inspection of the input parameters showed that late-life recovery in this cluster differed significantly from the others. Being able to visualize different properties of multiple scenarios simultaneously, at both group and grid level, is a powerful capability that not only generates insights but also significantly reduces analysis time and helps in quality checking property modelling and grid behavior. The workflow is generic in nature, works with various simulators, and can be extended to assessing history match quality in Assisted History Matching (AHM) and multi-scenario modelling. Key parameters impacting different scenarios were identified, and the team observed a 10x reduction in analysis time and a significant reduction in manpower requirements with the new approach.
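A minimal sketch of the clustering step described above, assuming each scenario's simulator output has already been exported to NumPy arrays of per-gridblock pressure and saturation changes over time, and using scikit-learn's K-Means (the function and variable names are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def cluster_scenarios(dp, dsw, n_clusters=20, seed=0):
    """Cluster history-matched scenarios by their dynamic response.

    dp, dsw: arrays of shape (n_scenarios, n_gridblocks * n_timesteps)
    holding pressure and water-saturation changes for every gridblock.
    Returns a cluster label per scenario and one representative scenario
    index per cluster.
    """
    features = np.hstack([dp, dsw])                       # one row per scenario
    features = StandardScaler().fit_transform(features)   # put dP and dSw on a common scale
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(features)

    # Representative scenario = the member closest to each cluster centroid
    reps = []
    for k in range(n_clusters):
        members = np.where(labels == k)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[k], axis=1)
        reps.append(int(members[np.argmin(dists)]))
    return labels, reps
```

In this sketch the 480 scenarios would reduce to the 20 representative runs returned in `reps`, which is the subset the team would then inspect on the dashboard.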
Short Term Injection Re-distribution (STIR) is a Python-based real-time waterflood optimization technique for brownfield assets that uses advanced data analytics. The objective of this technique is to generate recommendations for injection water re-distribution that maximize oil production at the facility level. Even though it is data driven, the technique is tightly constrained by petroleum engineering principles such as material balance. The workflow integrates and analyses short-term data (the last 3-6 months) at reservoir, well and facility level. The STIR workflow is divided into three modules: (1) injector-producer connectivity, (2) injector efficiency, and (3) injection water optimization. The first module uses four major data types to estimate the connectivity between each injector-producer pair in the reservoir: (a) producer data (pressure, water cut, GOR, salinity), (b) presence of faults, (c) subsurface distance, and (d) perforation similarity (layers and kh). The second module uses connectivity and water cut data to establish injector efficiency; high-efficiency injectors contribute most to production, while poor-efficiency injectors contribute to water recycling. The third module applies a mathematical optimizer to maximize oil production by re-distributing the injection water amongst injectors while honoring the constraints at each node (well, facility, etc.) of the production system. The STIR workflow has been applied to 6 reservoirs across different assets, and an annual increase of 3-7% in oil production is predicted. Each recommendation is verified using an independent source of data, and hence the generated recommendations align very well with the reservoir understanding. The benefits of this technique can be seen within 3-6 months of implementation in terms of increased oil production and better support (pressure increase) to low water cut producers. The inherent flexibility of the workflow allows easy replication in any waterflooded reservoir, and it works best when the injector well count in the reservoir is relatively high. Geological features are well represented in the workflow, which is one of its unique functionalities. The method also generates producer bean-up and injector stimulation candidates. This low-cost (no CAPEX) technique combines the advantages of conventional petroleum engineering techniques and a data-driven approach. It provides a strong alternative for waterflood management in brownfields where performing a reliable conventional analysis is challenging or at times impossible. STIR can be implemented in a reservoir from scratch within a 3-6 week timeframe.
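As an illustration of the third module (not Shell's actual STIR implementation), the water re-distribution step can be posed as a small linear program: maximize an efficiency-weighted injection response while keeping the facility-level water volume fixed and respecting per-well limits. The sketch below assumes the injector efficiencies from module 2 act as a linear proxy for incremental oil per unit of injected water:

```python
import numpy as np
from scipy.optimize import linprog

def redistribute_injection(efficiency, q_current, q_min, q_max):
    """Suggest new injection rates for each injector.

    efficiency, q_current, q_min, q_max: 1-D arrays, one entry per injector.
    The total injected volume is held at its current facility-level value.
    """
    total = float(np.sum(q_current))          # facility-level water is fixed
    c = -np.asarray(efficiency, dtype=float)  # linprog minimizes, so negate the objective
    a_eq = np.ones((1, len(c)))               # sum of injection rates = total
    b_eq = np.array([total])
    bounds = list(zip(q_min, q_max))          # per-node (well) constraints
    res = linprog(c, A_eq=a_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x                              # recommended injection rates
```

A real implementation would add further node constraints (headers, plants) and tie the objective back to the connectivity and efficiency modules, but the structure of the optimization is the same.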
Capacitance resistance modeling (CRM) is a data-driven analytical technique for waterflood optimization developed in the early 2000s. The popular implementation uses only production/injection data as input and makes the simplifying assumptions that pressure is maintained and that injection is the primary driver of production. While these assumptions make CRM a quick, plug-and-play technique that can easily be replicated between assets, they also lead to major pitfalls because they are often invalid. This study explores these pitfalls and discusses workarounds and mitigations to improve the reliability of CRM. CRM was used as a waterflood optimization technique for 3 onshore oil fields, each having hundreds of active wells, multiple stacked reservoirs, and over 15 years of pattern waterflood development. The CRM algorithm was implemented in Python and consists of 4 modules: 1) a connectivity solver module, where connectivity between injectors and producers is quantified using a 2-year history match period; 2) a fractional flow solver module, where oil rates are established as a function of injection rates; 3) a verification module, which is a blind test to assess history match quality; and 4) a waterflood optimizer module, which redistributes water between injectors subject to facility constraints and estimates the potential oil gain. Additionally, CRM results were interpreted and validated using an integrated visualization dashboard. The two main issues encountered while using CRM in this study were 1) poor history match (HM) quality and 2) very long run times, on the order of tens of hours, due to the large number of wells. The poor HM was attributed to significant noise in the production data, aquifer support contributing to production, and well interventions such as water shut-offs and re-perforations contributing to oil production. These issues were mitigated and the HM was improved using data cleaning techniques such as smoothing and outlier removal, and by introducing pseudo aquifer injectors for material balance. However, these techniques are not foolproof because CRM relies only on trends between producers and injectors for waterflood optimization. Run time was reduced to a couple of hours by breaking the reservoir into sectors and using parallelization.
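For readers unfamiliar with the connectivity solver, a hedged sketch of a producer-based CRM (CRMP) fit is shown below, neglecting bottomhole pressure variation. It assumes monthly liquid rates `q_obs` for one producer and injection rates `inj` for its candidate injectors, and it estimates the producer's time constant and injector-producer gains by least squares; this is a simplified stand-in for the paper's module 1, not its actual code:

```python
import numpy as np
from scipy.optimize import minimize

def crmp_predict(params, q0, inj, dt=1.0):
    """Predict producer liquid rate from injection using the CRMP recursion
    q_k = q_{k-1} * exp(-dt/tau) + (1 - exp(-dt/tau)) * sum_i f_i * I_i(k)."""
    n_inj = inj.shape[1]
    tau, f = params[0], params[1:1 + n_inj]
    decay = np.exp(-dt / tau)
    q = np.empty(len(inj))
    q_prev = q0
    for k in range(len(inj)):
        q_prev = q_prev * decay + (1.0 - decay) * (inj[k] @ f)
        q[k] = q_prev
    return q

def fit_connectivity(q_obs, inj, dt=1.0):
    """Fit tau and the gains f_i for one producer over a history match window."""
    n_inj = inj.shape[1]
    x0 = np.r_[5.0, np.full(n_inj, 1.0 / n_inj)]      # initial tau and equal gains
    bounds = [(1e-2, 50.0)] + [(0.0, 1.0)] * n_inj    # tau > 0, 0 <= f_i <= 1
    obj = lambda p: np.sum((crmp_predict(p, q_obs[0], inj, dt) - q_obs) ** 2)
    res = minimize(obj, x0, bounds=bounds, method="L-BFGS-B")
    return res.x[0], res.x[1:]                        # time constant, gains
```

Repeating this fit for hundreds of producers is what drives the long run times mentioned above, and sectorizing the field plus parallelizing these independent fits is the natural mitigation.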
Normal Move-Out (NMO) velocity pick editing is the segregation of good and bad picks from an unsupervised auto-picking algorithm. As not all of these picks are correct, manual velocity editing is required. This is time-consuming, repetitive, and typically occupies a seismic expert for days to weeks. Automating it would require an algorithm that mimics the domain knowledge and expertise of a seismic processor; a deterministic approach would therefore likely fail. Instead, we propose a machine learning algorithm to identify valid time-velocity picks. The proposed approach is a supervised classification approach that uses human-interpreted velocity picks (1-5% of all picks) as training data. The algorithm learns to recognize the features of a valid velocity pick from metadata such as semblance energy, depth, and areal location, and uses this understanding to segregate valid picks from invalid ones (multiples, etc.) amongst the remaining velocity picks. The algorithm was trained using synthetic NMO picks created by finite-difference forward modelling of CMP data, including multiples, in the Marmousi model and auto-picking the move-out. The ground-truth NMO picks were created directly from the velocity model. The trained classification neural network shows very high (>97%) accuracy in segregating valid and invalid NMO velocity picks based on a 5% training data set. Further reduction of the training data set to 1% of the velocity picks reduces test accuracy by only an additional 2 percentage points. Training and execution times of the neural network on a dataset of ~40,000 velocity picks are also very short (<5 minutes). Initial results on RMO picks show very similar performance characteristics. The metadata for all valid picks spans a multi-dimensional feature space, from which the neural network constructs a non-linear selection criterion. A human can either manually QC each pick or perform attribute-based selection using only lower-dimensional linear selection criteria. The robustness and speed of the neural network outperform manual editing while also reducing cycle time; the resulting velocity models will be superior, leading to improved signal processing and imaging results further along the processing sequence. Automating velocity picking and editing has been a research objective for many years, but only with the availability of modern computation and optimization algorithms can we properly deploy it to augment high-quality modern velocity picking software and significantly decrease turn-around time by automating the picking and QC process.
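A minimal sketch of such a pick classifier, assuming each auto-picked time-velocity pair comes with a row of metadata features (semblance energy, two-way time or depth, areal location, picked velocity) in an array `X`, with human labels (1 = valid, 0 = invalid) available for the 1-5% training subset. The network size and preprocessing here are assumptions, not the authors' configuration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_pick_classifier(x_labelled, y_labelled):
    """Train a small feed-forward classifier on the human-edited subset of picks."""
    clf = make_pipeline(
        StandardScaler(),                                   # features have very different units
        MLPClassifier(hidden_layer_sizes=(32, 16),
                      max_iter=2000, random_state=0),
    )
    clf.fit(x_labelled, y_labelled)
    return clf

# Usage on the remaining ~95-99% of picks:
# clf = train_pick_classifier(X[labelled_idx], y[labelled_idx])
# valid_mask = clf.predict(X) == 1   # picks to keep for velocity model building
```

The key design point, as argued above, is that the network draws a non-linear decision boundary in this multi-dimensional metadata space, whereas manual attribute-based editing is limited to a few linear cut-offs at a time.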
A key to successful Well, Reservoir and Facilities Management (WRFM) is an up-to-date opportunity funnel. In large mature fields, WRFM opportunity identification is heavily dependent on effective exploitation of measured and interpreted data. This paper presents a suite of data-driven workflows, collectively called the WRFM Opportunity Finder (WOF), that generates ranked lists of opportunities across the WRFM opportunity spectrum. The WOF was developed for a mature waterflooded asset with over 500 active wells and over 30 years of production history. The first step included data collection and clean-up using Python routines and integration of the data into an interactive visualization dashboard. The WOF used this data to generate ranked lists of the following opportunity types: (a) bean-up/bean-down candidates, (b) water shut-off candidates, (c) add-perf candidates, (d) PLT/ILT data gathering candidates, and (e) well stimulation candidates. The WOF algorithms, implemented in Python, largely comprise rule-based workflows with occasional use of machine learning in intermediate steps. In a large mature asset, field/reservoir/well reviews are typically conducted area by area or reservoir by reservoir and are therefore slow. It is challenging to maintain an up-to-date holistic overview of opportunities across the field that allows prioritization of the best opportunities. Though the opportunity screening logic may be linked to clear physics-based rules, maturing opportunities is often difficult because it requires processing and integration of large volumes of multi-disciplinary data through laborious manual review processes. The WOF addressed these issues with data processing algorithms that gathered data directly from databases and applied customized data processing routines, reducing data preparation and integration time by 90%. The WOF used workflows linked to petroleum engineering principles to arrive at ranked lists of opportunities with the potential to add 1-2% incremental oil production. The integrated visualization dashboard allowed quick and transparent validation of the identified opportunities and their ranking basis using a variety of independent checks. The results from the WOF will inform a range of business delivery elements such as the workover and data gathering plan, exception-based surveillance, and the facilities debottlenecking plan. The WOF exploits the best of both worlds: physics-based solutions and data-driven techniques. It offers transparent logic that is scalable and replicable to a variety of settings and hence has an edge over pure machine learning approaches. The WOF accelerates identification of low-capex/no-capex opportunities using existing data. It promotes maximization of returns on investments already made and hence lends resilience to the business in a low oil price environment.
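To make the rule-based nature of the WOF concrete, the sketch below shows what one screen of this kind might look like for water shut-off candidates, using pandas on a well-level table. The column names (`well`, `water_cut`, `gross_rate`), the cut-offs, and the scoring proxy are all hypothetical; the asset's actual WOF logic is physics-based and asset-specific:

```python
import pandas as pd

def rank_water_shutoff_candidates(wells: pd.DataFrame, wc_cutoff: float = 0.9) -> pd.DataFrame:
    """Return a ranked list of wells screened as water shut-off candidates.

    Rule: high water cut and above-median gross rate; rank by a simple
    water-oil ratio weighted by rate (illustrative proxy only).
    """
    cand = wells[(wells["water_cut"] >= wc_cutoff) &
                 (wells["gross_rate"] > wells["gross_rate"].median())].copy()
    wor = cand["water_cut"] / (1.0 - cand["water_cut"]).clip(lower=1e-3)
    cand["score"] = wor * cand["gross_rate"]
    return cand.sort_values("score", ascending=False)[["well", "water_cut", "gross_rate", "score"]]
```

Each opportunity type in the list above would have its own screen of this form, with the dashboard used to validate the ranked output against independent data.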