This paper (SPE 51969) was revised for publication from paper SPE 37975, first presented at the 1997 SPE Reservoir Simulation Symposium, Dallas, 8-11 June. Original manuscript received for review 30 June 1997. Revised manuscript received 30 March 1998. Paper peer approved 6 July 1998.
Summary
We describe a new production simulator, Falcon, that runs real-world problems on parallel computers up to 100 times faster than current production simulators running on a vector computer. Falcon has been used to conduct the largest geostatistical reservoir study ever performed within Amoco. In this paper we discuss the following: Falcon's data-parallel paradigm with Fortran 90 and High Performance Fortran (HPF); its single-program, multiple-data (SPMD) paradigm with message passing; efficient memory management that enables simulation of very large studies; and a numerical formulation that reconciles the generalized compositional approach (based on component masses and pressure) with earlier approaches (based on pressures and saturations) in a more general and more efficient formulation.

We also discuss Falcon's scalability to 512 processor nodes and its performance (timings and memory) on a number of parallel platforms, including Cray Research's T3D and T3E, SGI's Power Challenge and Origin 2000, Thinking Machines' CM-5, and IBM's SP2. Falcon also runs on single-processor computers such as PCs and IBM's RS/6000.

Finally, we discuss a new parallel linear-solver technology based on a fully parallel, scalable implementation of incomplete lower-upper (ILU) preconditioning coupled with a GMRES or Orthomin iteration. This naturally ordered, global ILU preconditioner scales to hundreds of processors and efficiently solves the matrix problems that arise from large-scale simulations.
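The solver strategy above (an ILU preconditioner driving a Krylov iteration such as GMRES or Orthomin) can be illustrated with a minimal serial sketch. The example below uses SciPy on a small 5-point test matrix standing in for a pressure-like system; the test problem, parameter values, and function names are illustrative assumptions, not Falcon's actual parallel, naturally ordered implementation.

```python
# Minimal serial sketch (SciPy) of ILU-preconditioned GMRES on a toy
# 5-point matrix; illustrative only, not Falcon's parallel implementation.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def poisson_2d(n):
    """5-point Laplacian on an n x n grid, a stand-in for a pressure system."""
    main = 4.0 * np.ones(n * n)
    off = -1.0 * np.ones(n * n - 1)
    off[np.arange(1, n * n) % n == 0] = 0.0  # no coupling across grid rows
    far = -1.0 * np.ones(n * n - n)
    return sp.diags([main, off, off, far, far], [0, -1, 1, -n, n], format="csc")


n = 64
A = poisson_2d(n)
b = np.ones(n * n)

# Incomplete LU factorization of A, wrapped as a preconditioner operator.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

# Restarted GMRES using the ILU factorization as the preconditioner.
x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
print("converged" if info == 0 else f"info={info}",
      "residual norm:", np.linalg.norm(b - A @ x))
```

In Falcon, as described above, the analogous factorization is computed and applied across the processor decomposition while preserving the natural global ordering, which is what allows the preconditioner to scale to hundreds of processors.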
The use of the techniques described in this paper has enabled us to run problem sizes of up to 16.5 million gridblocks. Falcon was used to simulate fifty geostatistically derived realizations of a large black-oil waterflood system. The realizations, each with 2.3 million cells and 1,039 wells, took an average of 4.2 hours to run on a 128-node CM-5, allowing the simulation study to finish in less than a month. In this field study we bypassed upscaling by using fine vertical-resolution gridding.
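As a rough consistency check on the turnaround quoted above, the aggregate compute time implied by those figures can be tallied directly; this is only a back-of-the-envelope sketch using numbers already stated in the text.

```python
# Aggregate compute time for the fifty-realization study, using the
# figures quoted above (50 realizations at 4.2 hours average each).
realizations = 50
hours_per_run = 4.2
total_hours = realizations * hours_per_run
print(f"{total_hours:.0f} machine-hours ~= {total_hours / 24:.1f} days of compute")
# About 210 machine-hours (under 9 days of pure compute on the 128-node
# CM-5), consistent with completing the study in less than a month once
# scheduling and turnaround on a shared machine are accounted for.
```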
Our focus has been on the applicability of Falcon to real-world problems. Falcon can be used to model both small and very large reservoirs, including reservoirs characterized by geostatistics. It can simulate black-oil, gas/water, and dry-gas reservoirs, and a fully compositional capability is under development.