A methodology for constructing deep neural network (DNN)- and recurrent neural network (RNN)-based proxy flow models is presented; these models can reduce the computational time of flow simulation runs in routine reservoir engineering workflows such as history matching and optimization. A comparison of the two techniques shows that the DNN model generates predictions more quickly, whereas the RNN model provides better prediction quality. In addition, RNN-based proxy flow models can make predictions for times beyond those included in the training data set. Both approaches can reduce computational time by a factor of up to 100 compared with the full-physics flow simulator. An application of the proxy flow model is successfully demonstrated in an exhaustive-search history matching exercise. All developments are illustrated on the synthetic Brugge petroleum reservoir.
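As a rough illustration of the RNN-based proxy idea described above, the sketch below trains a small recurrent network to map sequences of well controls to simulated responses. All array shapes, variable names, and the placeholder random data are assumptions for illustration only; the abstract does not specify the authors' actual architecture or inputs.

```python
# Minimal sketch of an RNN-based proxy flow model (assumed setup): the proxy
# maps a sequence of well controls (e.g., injection/production rates) to
# predicted field responses (e.g., oil rate) learned from full-physics runs.
import numpy as np
from tensorflow import keras

n_runs, n_steps, n_controls, n_outputs = 200, 60, 8, 2  # hypothetical sizes

# Placeholder training data standing in for full-physics simulation runs:
# X holds control schedules, Y holds the simulated responses per time step.
X = np.random.rand(n_runs, n_steps, n_controls).astype("float32")
Y = np.random.rand(n_runs, n_steps, n_outputs).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(None, n_controls)),   # variable-length sequences
    keras.layers.LSTM(64, return_sequences=True),   # recurrent core of the proxy
    keras.layers.Dense(n_outputs),                   # per-step response prediction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=10, batch_size=16, verbose=0)

# Because the network is applied step by step, it can also be evaluated on
# sequences longer than those seen in training, which mirrors the abstract's
# point about predicting beyond the training time horizon.
X_long = np.random.rand(1, n_steps + 20, n_controls).astype("float32")
y_long = model.predict(X_long)
```

Once trained, evaluating such a proxy is a single forward pass per candidate model, which is where the reported speed-up over the full-physics simulator would come from in an exhaustive-search history matching loop.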
Advances in the fields of information technology, computation, and predictive analytics have permeated the energy industry and are reshaping methods for exploration, development, and production. These technologies can be applied to subsurface data to reliably predict a host of properties where only a few measurements are available. Among the numerous sources of subsurface data, rock and fluid analyses stand out as a means of directly measuring subsurface properties. The challenge in this work is to maximize the information gained from legacy PDF reports and unstructured data tables representing over 70 years of laboratory work and investment. Organizing these data into a structured data store enables better assessment of economic viability and producibility in frontier basins and makes it possible to identify bypassed pay in old wells for which no rock material may remain. This paper presents innovative and agile technologies that integrate data management, data quality assessment, and predictive machine learning to maximize company asset value from underutilized legacy core data. The developed machine learning algorithms identify potential outliers, benchmark the data against current industry standards, increase confidence in data quality, and avoid amplifying errors when predicting reservoir properties. The workflow presented in this paper is expected to reduce the uncertainties in subsurface studies caused by limited core data, improper analog selection, the high cost of and limited time for acquiring new cores, and the long delivery times of core analysis data. It also reduces the need to rework subsurface formation evaluations as new data become available at later project stages, resulting in optimized field development. Enhanced by machine learning, the workflow further improves the prediction and propagation of reservoir properties to uncored borehole sections. In conclusion, managing legacy core data and transforming it to generate new subsurface insights is a critical step in establishing a reliable database in support of business excellence and the digitalization journey. Innovative machine learning tools continue to unlock new value from legacy core data, with significant impact across the entire reservoir life cycle, including reserves booking, production forecasting, well placement, and completion design.
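A minimal sketch of the outlier-screening step described above is shown below, assuming digitized core measurements are available as a table. The column names, the choice of an isolation-forest detector, and the contamination level are illustrative assumptions; the abstract does not state which algorithm the authors used.

```python
# Minimal sketch (assumed): flag potential outliers in legacy core-analysis
# data before using it to train reservoir-property prediction models.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Placeholder table standing in for digitized legacy core measurements.
core = pd.DataFrame({
    "porosity_frac": np.random.uniform(0.05, 0.30, 500),
    "log10_perm_md": np.random.normal(1.0, 0.8, 500),
    "grain_density_gcc": np.random.normal(2.65, 0.03, 500),
})

# Fit an unsupervised outlier detector on the numeric measurements; samples
# labelled -1 are candidates for review against industry-standard benchmarks.
detector = IsolationForest(contamination=0.02, random_state=0)
cols = ["porosity_frac", "log10_perm_md", "grain_density_gcc"]
core["outlier_flag"] = detector.fit_predict(core[cols])

suspect = core[core["outlier_flag"] == -1]
print(f"{len(suspect)} of {len(core)} samples flagged for QC review")
```

Screening of this kind is intended to keep questionable legacy measurements from propagating into the downstream property-prediction models, in line with the abstract's goal of avoiding amplified errors.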