High resolution reservoir modeling is necessary to analyze complex flow phenomena in reservoirs. As more powerful computing platforms become available, reservoir simulation engineers are building larger, higher resolution models to study giant fields. A large number of simulations is required to validate a model and to reduce uncertainty in prediction results. It is challenging to accurately model complex reservoir processes while efficiently solving high resolution giant reservoir models on rapidly evolving hardware and software platforms. Several constraints limit the performance of large scale reservoir simulation on these constantly evolving platforms. In this study, we review some of these constraints and show their effects on practical reservoir simulation. We also review emerging computing platforms and highlight the opportunities and challenges they present for reservoir simulation. We anticipate that management of data locality by the simulator will become essential to achieving good performance on emerging platforms, and that hardware heterogeneity will make good performance difficult to obtain without adopting a hybrid parallelization style in the simulator. We analyze a range of benchmark results to illustrate these challenges in high performance reservoir simulation on current and emerging computing platforms.
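The scalability constraints discussed above are often summarized by Amdahl's law: any serial fraction of a simulator (e.g., unparallelized I/O or communication) caps the speedup attainable on large core counts. A minimal sketch, with purely illustrative serial fractions:

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Ideal speedup on n_procs when serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even a small serial fraction caps speedup on large core counts;
# the limit as n_procs -> infinity is 1 / serial_fraction.
for f in (0.01, 0.05):
    print(f"serial={f:.0%}: speedup on 1024 cores = "
          f"{amdahl_speedup(f, 1024):.1f}x (limit {1 / f:.0f}x)")
```

This is why hybrid parallelization matters: shrinking the serial/communication fraction, not just adding cores, governs performance at scale.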
Simulation of high resolution reservoir models provides valuable insight into oil and gas reservoirs. Today, massive, comprehensive reservoir simulation models can be built from detailed geological and well log data. These models require a very large high performance computing (HPC) platform. Saudi Aramco has developed a state-of-the-art simulator, GigaPOWERS™, capable of simulating multi-billion cell reservoir models. This paper provides an overview of the challenges of constructing, simulating, and visualizing the output of giant reservoir models, and describes how the computational platform at Saudi Aramco is designed to overcome them. A large HPC platform for reservoir simulation can be built by connecting multiple Linux clusters into a simulation grid, providing the capacity and computational power needed to solve multi-billion cell models. Such a simulation grid has been designed in the Saudi Aramco EXPEC Computer Center. In this study, we provide benchmark results for multiple giant fields to evaluate its performance. Communication and I/O routines in the simulator can add considerable computational overhead on such a platform; connectivity between the clusters in our simulation grid is therefore tuned to maintain a high level of scalability, and excellent scalability has been obtained for computations of giant simulation models. Models on the order of one billion cells also challenge pre- and post-processing applications to load and process data in a reasonable time. Remote visualization, Level of Detail, and Load on Demand algorithms were implemented in these applications, and data formats were revised to process and visualize massive simulation models efficiently.
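Scalability benchmarks of the kind described above are usually reported as strong-scaling parallel efficiency: the measured speedup relative to a baseline run, divided by the increase in core count. A minimal sketch (the wall-clock times below are hypothetical, not the paper's benchmark data):

```python
def parallel_efficiency(t_base: float, n_base: int, t_n: float, n: int) -> float:
    """Strong-scaling efficiency of a run on n cores relative to a
    baseline run on n_base cores: speedup / (core-count ratio)."""
    speedup = t_base / t_n
    return speedup / (n / n_base)

# Hypothetical wall-clock times (seconds) for a fixed-size model.
runs = [(512, 8000.0), (1024, 4300.0), (2048, 2400.0)]
n0, t0 = runs[0]
for n, t in runs:
    print(f"{n:5d} cores: efficiency {parallel_efficiency(t0, n0, t, n):.0%}")
```

Efficiency below 100% at high core counts typically reflects the communication and I/O overheads the abstract mentions.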
Streamlines provide snapshots of flow patterns in the field, which simulation engineers can use to shorten the history matching cycle when validating reservoir models. They can also help engineers develop injection strategies and improve sweep efficiency by analyzing flow patterns and estimating injector-to-producer relationships during computations of various prediction scenarios. Our strategy in this study is to use the detailed reservoir conditions obtained from traditional reservoir simulations and to calculate streamlines from the computed flow field. Saudi Aramco's in-house simulator POWERS and its post-processing environment have been enhanced to generate streamlines from computed reservoir flow fields: streamline tracing software developed at Texas A&M University has been coupled with POWERS and customized to generate streamline output. The tracing algorithm is parallelized to run on Linux clusters. Several features, such as computation of injector/producer allocation factors and injector efficiencies, have been implemented in our streamline generation tool kit. In this paper, we present two case studies illustrating the advantages of streamlines when used in a workflow alongside reservoir simulation to improve waterflood management of a field. Our analysis produced an optimal design that maintains the production target with fewer injectors, and streamline analysis helped us find optimal locations for the injection wells.
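Tracing streamlines from a simulator's computed flow field is commonly done cell by cell with Pollock's semi-analytical method, which assumes the velocity varies linearly between opposite cell faces. The abstract does not specify the tracing algorithm used, so the following is only a sketch of one Pollock step through a single rectangular cell:

```python
import math

def pollock_step(x, y, vx0, vx1, vy0, vy1, dx=1.0, dy=1.0):
    """One Pollock-style tracing step through a rectangular cell.

    Face velocities (vx0, vx1) and (vy0, vy1) are interpolated
    linearly across the cell; (x, y) is the entry point in local
    cell coordinates [0, dx] x [0, dy].  Returns the exit point
    and the transit time to whichever face is reached first."""
    def exit_time(p, v0, v1, length):
        g = (v1 - v0) / length            # velocity gradient across the cell
        v = v0 + g * p                    # interpolated velocity at entry
        if abs(g) < 1e-12:                # (near-)constant velocity
            if v > 0:
                return (length - p) / v
            if v < 0:
                return -p / v
            return math.inf               # stagnant along this axis
        v_exit = v1 if v > 0 else v0
        if v * v_exit <= 0:               # flow reverses: no exit this axis
            return math.inf
        return math.log(v_exit / v) / g   # exponential travel-time formula
    tx = exit_time(x, vx0, vx1, dx)
    ty = exit_time(y, vy0, vy1, dy)
    t = min(tx, ty)
    def advance(p, v0, v1, length, t):
        g = (v1 - v0) / length
        v = v0 + g * p
        if abs(g) < 1e-12:
            return p + v * t
        return p + v * (math.exp(g * t) - 1.0) / g
    return advance(x, vx0, vx1, dx, t), advance(y, vy0, vy1, dy, t), t

# Uniform unit flow in x: a particle entering at (0, 0.5) crosses the cell.
print(pollock_step(0.0, 0.5, 1.0, 1.0, 0.0, 0.0))
```

A full tracer repeats this step cell to cell from each well perforation; injector/producer allocation factors then follow from bundling the traced streamlines by their endpoints.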
Assisted history matching (AHM) methodologies provide a systematic approach to history matching reservoir models while accounting for uncertainties. They also quantify the sensitivity of the reservoir response within the uncertainty range of the parameters. A simulation model usually carries large degrees of uncertainty, and as the model grows, both the engineering and computational complexities associated with AHM become massive. The performance of an AHM algorithm depends on its ability to provide a solution with an acceptable level of accuracy and uncertainty tolerance, and on the computational efficiency with which it reaches that goal. This study provides performance evaluation guidelines for AHM studies and cost-benefit metrics for feasible history matching of giant simulation models. The metrics consider several criteria: the quality of the simulation model, the compute and storage resources required, the time to converge to an optimal or acceptable model, user friendliness, and ease of integrating the tool into an existing simulation environment. The goal of these metrics is to help reservoir engineers identify the class of tools and algorithms best suited to history matching studies of simulation models. The metrics were used to evaluate two stochastic tools: one utilizing genetic/evolutionary algorithms and the other using different global statistical algorithms. The study was performed using an oil field in Saudi Arabia and identified key strengths and shortcomings of these two classes of algorithms for large scale history matching. The paper demonstrates that these metrics can serve as a suitable screening tool for identifying an appropriate history matching methodology.
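The genetic/evolutionary class of AHM tools mentioned above searches the uncertain parameter space by repeatedly mutating and selecting candidate models to minimize a history-match misfit. A minimal, generic sketch of that idea (the two parameters, their bounds, and the misfit function are hypothetical illustrations, not the tools evaluated in the study):

```python
import random

def evolve(misfit, bounds, pop_size=30, generations=40, seed=1):
    """Minimal elitist evolutionary search: each generation keeps the
    best half of the population and refills it with Gaussian mutations
    of the survivors.  `misfit` maps a parameter vector to a
    history-match error; `bounds` gives (lo, hi) per parameter."""
    rng = random.Random(seed)
    def clamp(x, lo, hi):
        return max(lo, min(hi, x))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        parents = pop[: pop_size // 2]          # keep the best half
        children = [
            [clamp(x + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
             for x, (lo, hi) in zip(p, bounds)]
            for p in parents                    # mutate each survivor
        ]
        pop = parents + children
    return min(pop, key=misfit)

# Toy misfit: distance of (permeability multiplier, aquifer strength)
# from a hypothetical "true" model at (1.5, 0.3).  In a real AHM study
# each evaluation would be a full reservoir simulation run.
best = evolve(lambda v: (v[0] - 1.5) ** 2 + (v[1] - 0.3) ** 2,
              bounds=[(0.1, 3.0), (0.0, 1.0)])
```

The cost-benefit metrics in the study matter precisely because every `misfit` evaluation here stands in for a full simulation run, so convergence speed translates directly into compute cost.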