We describe a multistage parallel linear solver framework developed as part of the Intersect (IX) next-generation reservoir simulation project. The object-oriented framework allows wide flexibility in the number of stages, methods, and preconditioners. Here, we describe the specific components of a two-stage CPR[1] (Constrained Pressure Residual) scheme designed for large-scale parallel, structured and unstructured linear systems. We developed a highly efficient in-house Parallel Algebraic Multigrid (PAMG) solver as the first-stage preconditioner. For the second stage, we use a parallel ILU-type scheme. This new and powerful combination of CPR and PAMG was the result of detailed analysis of the linear system of equations associated with reservoir simulation. Using several difficult reservoir simulation problems, we demonstrate the robustness and excellent parallel scalability of the IX linear solver. For the field case studies, the IX linear solver with CPR and PAMG is at least five times faster than an established and widely used industrial linear solver. The performance advantage of the IX linear solver over traditional reservoir simulation linear solvers increases with both problem size and the number of processors.

Introduction

Different types of grid may be used for reservoir flow simulation to model geometrically complex, highly detailed models and/or deviated or multi-lateral wells[2]. Grid types are often labeled based on their structure. Examples of simulation grids include: structured Cartesian, structured stratigraphic, multi-block stratigraphic, PEBI (Perpendicular Bisector), and generally unstructured. Hybrid grids that combine various types can also be used.
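The two-stage CPR structure described in the abstract can be illustrated with a small serial sketch. Everything below is a stand-in, not the paper's implementation: a dense direct solve replaces the PAMG pressure solve, a few Jacobi sweeps replace the parallel ILU second stage, and the toy system is a hypothetical diagonally dominant matrix with two unknowns (pressure first) per cell.

```python
import numpy as np

def cpr_apply(A, r, n_cells):
    """One application z = M^-1 r of a two-stage CPR-style preconditioner.

    Stage 1 solves the pressure subsystem (a direct solve here stands in
    for PAMG); stage 2 smooths the remaining full-system residual (three
    Jacobi sweeps stand in for a parallel ILU-type scheme).
    """
    p = np.arange(0, 2 * n_cells, 2)                 # pressure unknown indices
    z = np.zeros_like(r)
    z[p] = np.linalg.solve(A[np.ix_(p, p)], r[p])    # stage 1: pressure solve
    y = np.zeros_like(r)
    d = np.diag(A)
    for _ in range(3):                               # stage 2: smooth A y = r - A z
        y += (r - A @ (z + y)) / d
    return z + y

# Toy diagonally dominant system with 2 unknowns per cell.
rng = np.random.default_rng(0)
n_cells = 8
N = 2 * n_cells
A = 4.0 * np.eye(N) + 0.1 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
x = np.zeros(N)
for _ in range(20):                                  # preconditioned Richardson
    x += cpr_apply(A, b - A @ x, n_cells)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

In practice the outer loop would be a Krylov method such as GMRES rather than plain Richardson; the point of the sketch is only the two-stage structure: an accurate pressure solve followed by a cheap full-system smoother.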
It is now widely recognized that complete flexibility in representing complex and highly detailed simulation models can be achieved using generally unstructured grids[3]. In recent years, significant efforts have focused on building multi-purpose reservoir flow simulators that can deal with geometrically complex and highly detailed structured and unstructured reservoir models[4,5,6]. These relatively large-scale efforts are being pursued because, for nearly three decades, the reservoir simulation community has focused on building robust and efficient reservoir simulators for structured grid problems. Today, the ability to routinely simulate a wide spectrum of practical black-oil problems on (effectively) structured models with O(10^5) gridblocks is widespread. However, the performance of traditional reservoir simulators typically deteriorates significantly with problem size and the number of processors, because the algorithms and software implementations were not designed for scalable, parallel computation. A scalable algorithm is one whose computational complexity (i.e., the number of operations to reach solution) is proportional to the number of unknowns; moreover, the algorithm should also have a convergence rate that is independent of problem size and the number of processors. In numerical solution algorithms, there is often a tradeoff between convergence rate and degree of parallelism. As a result, to obtain a useful measure of parallel efficiency, the best scalar (uniprocessor) algorithm should be used as the reference. Scalable methods are needed because the size of problems of interest continues to grow significantly, and we want to avoid methods with computational complexities of O(N^a) with an exponent a that is (much) larger than unity.
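The scalability criterion above, cost growing like O(N^a) with a close to one, can be estimated empirically from two (size, runtime) measurements, since a = log(T2/T1) / log(N2/N1). The timings below are hypothetical, chosen only to show what scalable (a = 1) and non-scalable (a = 2) behavior look like:

```python
import math

def scaling_exponent(n1, t1, n2, t2):
    """Estimate a in T ~ c * N**a from two (problem size, runtime) points."""
    return math.log(t2 / t1) / math.log(n2 / n1)

# Hypothetical runtimes for an 8x increase in problem size:
a_scalable = scaling_exponent(1e5, 10.0, 8e5, 80.0)    # 8x work  -> a = 1
a_poor = scaling_exponent(1e5, 10.0, 8e5, 640.0)       # 64x work -> a = 2
print(a_scalable, a_poor)
```

A fit over several sizes is of course more reliable than two points, but even this back-of-the-envelope check quickly exposes a solver whose exponent is well above unity.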
This paper was selected for presentation by an SPE Program Committee following review of information contained in a proposal submitted by the author(s).
This paper describes the algorithms and implementation of a parallel reservoir simulator designed for, but not limited to, distributed-memory computational platforms, which can solve previously prohibitive problems efficiently. The parallel simulator inherits the multi-purpose features of the in-house sequential simulator, which is at the core of the new capability. As a result, black-oil, miscible, compositional, and thermal problems can be solved efficiently using this new simulator. A multilevel domain decomposition approach is used. First, the original reservoir is decomposed into several domains, each of which is assigned to a separate processing node. All nodes then execute computations in parallel, each node on its associated subdomain. The parallel computations include initialization, coefficient generation, linear solution on the subdomain, and input/output. To enhance the convergence rate, we solve a coarse global problem, which is generated via a multigrid-like coarsening procedure. This solution serves as a preconditioner for an outer parallel GMRES loop. The exchange of information across subdomains, or processors, is achieved using the message passing interface standard, MPI. The use of MPI ensures portability across computing platforms ranging from massively parallel machines to clusters of workstations. Results indicate that the simulator exhibits excellent scalability for up to 32 processors on the IBM SP2 system. Scalability results are also presented for a cluster of IBM workstations connected via an ATM (Asynchronous Transfer Mode) network. The use of ATM for interprocessor communication was found to have a small, but measurable, impact on scaling performance.

Introduction

The predictive capacity of a reservoir simulator depends first on the quality of the information used, and then on the ability of the computational grid and solution method to describe the flow behavior accurately.
The injection of more detail into reservoir description is producing very large models. Scale-up technology can be applied to reduce the overall size of the models while preserving the important details of the flow. For large-scale reservoir displacements, the scaled-up model itself could consist of millions of gridblocks. Flow simulation using models of that size is beyond the current capability of uniprocessor, or even shared-memory multiprocessor, compute platforms. In this work, we describe the development of a parallel multi-purpose reservoir simulator that can solve previously prohibitive problems efficiently. In addition, the parallel simulator provides the means to validate and help improve the scaled-up model by comparing its flow predictions with detailed simulations using the original finer-scale description from which it is derived. In the following sections, we give a brief overview of the parallel computing landscape and adopt a definition of scalability. That is followed by a description of our parallel simulation development strategy and implementation details. Performance results for the parallel simulator are then presented and analyzed. We close with key conclusions.

Background

The development of application codes for distributed-memory parallel platforms has been, until recently, a high-risk investment, both in terms of capital and manpower. This high-risk environment was due, in large part, to (1) an unstable landscape of parallel computing vendors and machines and (2) a lack of software portability across the various platforms. The focus had been on massively parallel machines with proprietary architectures that link hundreds, or even thousands, of specially designed processors. Because the processing nodes in these machines tended to have limited computing power and small local memory, massive parallelism, both in terms of total memory and compute power, was achieved by employing thousands of such processors.
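The two-level scheme described in the abstract above, independent subdomain solves accelerated by a coarse global correction that preconditions an outer Krylov loop, can be sketched serially. Everything here is a stand-in under stated assumptions: dense direct solves replace the parallel subdomain solvers, a one-level piecewise-constant aggregation replaces the multigrid-like coarsening, and conjugate gradients replaces GMRES (valid here because the toy matrix is symmetric positive definite).

```python
import numpy as np

def two_level_apply(A, r, domains, R):
    """Additive two-level preconditioner: independent subdomain solves
    plus a coarse global correction built from the Galerkin operator R A R^T."""
    z = np.zeros_like(r)
    for idx in domains:                            # local solves (parallel in spirit)
        z[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    Ac = R @ A @ R.T                               # coarse global problem
    z += R.T @ np.linalg.solve(Ac, R @ r)          # coarse correction
    return z

def pcg(A, b, M_apply, iters=40):
    """Preconditioned conjugate gradients (stand-in for the outer GMRES loop)."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_apply(r)
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < 1e-12 * np.linalg.norm(b):
            break
        z = M_apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 16
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian (SPD)
domains = [np.arange(i, i + 4) for i in range(0, n, 4)]  # 4 non-overlapping subdomains
R = np.zeros((len(domains), n))
for c, idx in enumerate(domains):
    R[c, idx] = 1.0                                      # piecewise-constant coarsening
b = np.ones(n)
x = pcg(A, b, lambda r: two_level_apply(A, r, domains, R))
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

The coarse correction is what makes the method's convergence rate insensitive to the number of subdomains: without it, block-Jacobi alone propagates information only one subdomain per iteration.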