The multiscale, multimesh flow simulators have been designed for the sole purpose of running very large, heterogeneous reservoir flow problems on massively parallel computers. This paper presents the flow simulation results and the corresponding CPU times. The multiscale flow simulator is written in Fortran 90/95 with OpenMP directives and compiled on high-performance SMP computers. The simulations were performed for several highly heterogeneous, channelized reservoir cases with realistic rock-fluid interaction (viscous, capillary, gravity, and compressibility effects) to evaluate the efficacy of multiscale, multimesh simulation in parallel computing. It is shown that the multiscale technique reduces computing time by several orders of magnitude while maintaining the same accuracy as conventional fine-scale simulation.
Introduction
In the simulation of displacement processes in large heterogeneous reservoirs, the computations are both time-consuming and expensive. There has therefore been a tendency to upscale fine-grid models to reduce the required CPU time. The problem with upscaling is that it often introduces inaccuracies into the numerical results (e.g., large numerical dispersion). Upscaling also cannot capture the architecture of the flow channels effectively; thus, the channeling effects are suppressed. Finally, upscaling algorithms usually lack a solid physical foundation. For instance, permeability upscaling has been handled through a logical flow-averaging technique; however, the upscaling of relative permeability curves has not been well developed1–4. Consequently, to minimize these upscaling issues, we have resorted to a multimesh, multiscale computing methodology5–6 that preserves the reservoir flow and transport characteristics at the fine-grid level while reducing the inherent computing time by several orders of magnitude.
Multiscale computation has been reported by several authors.7–17 We also presented an extension of the multiscale method to both single- and dual-porosity reservoirs at a previous meeting.5–6 Since then, we have improved our computing methodology, which is the subject of this paper.
The multiscale, multimesh simulator was compiled for a 64-bit SGI Altix with 256 1.5-GHz Itanium 2 CPUs. However, for the purposes of this study, we limited our usage to a maximum of 32 CPUs.
Computing Methodology
We solve the steady-state pressure equation on the global fine-grid mesh to obtain the flux distribution at the coarse-grid boundaries. These flux distributions are used as the weighting function for the local pressure update, instead of the transmissibility weighting used in our previous work.5–6
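As a hedged illustration of this weighting, the sketch below normalizes the fine-scale steady-state fluxes across one coarse-block boundary face into weights that sum to one; the module and routine names are hypothetical and are not taken from the simulator.

! Minimal sketch, not the production code: turn fine-scale steady-state
! fluxes across one coarse-block boundary face into weights that can later
! redistribute a coarse-scale flux over the same face.
module flux_weight_mod
   implicit none
contains
   subroutine flux_weights(q_fine, w)
      real(8), intent(in)  :: q_fine(:)  ! fine-scale fluxes on one coarse face
      real(8), intent(out) :: w(:)       ! resulting weights (same size as q_fine)
      real(8) :: qsum
      qsum = sum(abs(q_fine))
      if (qsum > 0.0d0) then
         w = abs(q_fine) / qsum          ! weights sum to one across the face
      else
         w = 1.0d0 / size(q_fine)        ! degenerate case: uniform weighting
      end if
   end subroutine flux_weights
end module flux_weight_mod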
We also use the above steady-state fine-grid flux distribution at the boundaries of the coarse grid to calculate the effective permeability tensor of each coarse-grid block. This upscaling approach differs from classical flow-based permeability upscaling, which is based on constant pressure at the boundaries; the latter approach is equivalent to imposing a fixed pressure gradient across the coarse-grid domain.
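A minimal sketch of the underlying idea, assuming single-phase, incompressible Darcy flow in one direction through the coarse block: the directional effective permeability is the value that reproduces the net fine-scale flux under the computed pressure drop. The function name and units are illustrative only, not the simulator's interface.

! Minimal sketch, assuming 1D single-phase incompressible Darcy flow:
! back-calculate a directional effective permeability from the net
! fine-scale flux response of the coarse block.
module upscale_mod
   implicit none
contains
   pure real(8) function k_eff(q_total, mu, length, area, dp)
      real(8), intent(in) :: q_total  ! net fine-scale flux through the block face [m3/s]
      real(8), intent(in) :: mu       ! fluid viscosity [Pa.s]
      real(8), intent(in) :: length   ! coarse-block length in the flow direction [m]
      real(8), intent(in) :: area     ! cross-sectional area normal to flow [m2]
      real(8), intent(in) :: dp       ! pressure drop across the block [Pa]
      ! Darcy's law q = k*A*dp/(mu*L) rearranged for k:
      k_eff = q_total * mu * length / (area * dp)
   end function k_eff
end module upscale_mod

Repeating such a calculation direction by direction, with the boundary fluxes prescribed from the global steady-state solution rather than from constant boundary pressures, would give diagonal entries of an effective tensor; the full tensor construction in the simulator is more involved than this sketch.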
The computation sequence is as follows (a hedged Fortran skeleton of this sequence follows the list):

a. Obtain the global fine-scale steady-state pressure solution to calculate the fine-scale flux weights at the boundaries of each coarse-scale node. This information is used to calculate the fine-scale fluxes within each coarse-scale node. For computational efficiency with very large grid systems, we use a block Jacobi iteration algorithm; for parallel processing purposes, the block Jacobi iteration can be performed with a red-black ordering scheme.
b. Obtain the global unsteady-state coarse-scale pressure solution at a large time step, Δt1, to calculate the coarse-scale fluxes.
c. Calculate the unsteady-state fine-scale fluxes at the coarse-grid boundaries using the coarse-scale fluxes weighted by the flux weights from Step a.
d. Calculate the fine-scale pressures and the fine-scale fluxes at the internal interfaces within each coarse-scale gridblock using the boundary conditions obtained in Step c.
e. Calculate the fine-grid saturations using smaller time steps, Δt2, constrained by the CFL criterion for the IMPES or sequential approach.
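The skeleton below sketches the sequence (a)–(e) as nested time loops, with the fine-scale saturation sub-steps Δt2 taken inside each large pressure step Δt1. All routine names, the CFL routine, and the numerical values are placeholders and do not reflect the simulator's actual interface.

! Hypothetical skeleton of the computation sequence (a)-(e).  Routine names,
! time-step values, and the CFL estimate are placeholders only.
program multiscale_skeleton
   implicit none
   real(8) :: t, t_end, dt1, dt2, t_fine
   t     = 0.0d0
   t_end = 100.0d0                            ! total simulated time (illustrative)
   dt1   = 10.0d0                             ! large coarse-scale pressure time step
   call fine_steady_pressure()                ! (a) global steady solve -> flux weights
   do while (t < t_end)
      call coarse_pressure(dt1)               ! (b) coarse-scale unsteady pressure/fluxes
      call distribute_boundary_fluxes()       ! (c) fine-scale boundary fluxes via weights
      ! (d) local fine-scale solves; independent coarse blocks of the same
      !     red-black color could be dispatched to OpenMP threads here.
      call local_fine_pressure()
      t_fine = 0.0d0
      do while (t_fine < dt1)                 ! (e) saturation sub-steps limited by CFL
         dt2 = min(cfl_limit(), dt1 - t_fine)
         call fine_saturation(dt2)
         t_fine = t_fine + dt2
      end do
      t = t + dt1
   end do
contains
   subroutine fine_steady_pressure()
   end subroutine fine_steady_pressure
   subroutine coarse_pressure(dt)
      real(8), intent(in) :: dt
   end subroutine coarse_pressure
   subroutine distribute_boundary_fluxes()
   end subroutine distribute_boundary_fluxes
   subroutine local_fine_pressure()
   end subroutine local_fine_pressure
   subroutine fine_saturation(dt)
      real(8), intent(in) :: dt
   end subroutine fine_saturation
   real(8) function cfl_limit()
      cfl_limit = 1.0d0                       ! stand-in for the CFL-limited step size
   end function cfl_limit
end program multiscale_skeleton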