Summary

Enriched gasfloods involve a complex interaction of heterogeneity, fingering, multiphase flow, and phase behavior. Experiments and simulations indicate that the optimum solvent enrichment in high-viscosity-ratio secondary gasfloods can be below the minimum miscibility enrichment (MME). For high-viscosity-ratio floods in heterogeneous media, the compositional path and resulting mobility profile in multidimensional multiple-contact miscible (MCM) or immiscible floods differ from their 1D counterparts.

Introduction

The objective of this work is to study the effect of phase behavior on bypassing in laboratory gasfloods by combined use of compositional modeling and laboratory computed tomography (CT) scanning. Oil was displaced from a heterogeneous core by several solvents at a constant, high viscosity ratio (1,600). Displacement was vertical to avoid gravity override. Bypassing of the oil during the flood was monitored with a vertical CT scanner. A 2D compositional model was used to simulate these displacements and a model three-component system at viscosity ratios of 22 to 200. The experimental data indicate that bypassing decreases as immiscibility increases. The solvent finger moved fastest in the single-phase displacement and slowest in the three-phase displacement. Compositional simulation of these floods was unstable at a viscosity ratio of 1,600. Model-system simulation indicates that as the viscosity ratio increases, sweep efficiency with first-contact-miscible (FCM) solvents deteriorates sharply. Sweep with near- and below-MME solvents does not decrease as sharply because of multiphase flow. Thus, optimum solvent enrichment in high-viscosity-ratio secondary gasfloods can be below the MME, and the compositional path and resulting mobility profile differ from their 1D counterparts.
Blunt et al.'s1 theory of compositional fingering does not work for the heterogeneous medium studied.

Background

The economics of hydrocarbon solvent flood projects depends on factors that include the enrichment level of the solvent as well as the slug size and WAG ratio. Similarly, the economics of CO2 flood projects depends heavily on the injection pressure. Common industry practice is to use a hydrocarbon solvent at or above its MME2 or to use CO2 at or above its minimum miscibility pressure (MMP).3 In 1D displacements, MME and MMP are the optimum levels of enrichment and pressure, respectively, for the injection solvent. However, reservoir flow is 3D. Rock heterogeneity, viscous fingering, gravity override, diffusion, dispersion, and the presence of mobile water may cause the optimum enrichment (or pressure) in a reservoir flood to differ from that in a 1D flood, especially in a slim-tube test. Injected solvent composition (or pressure) affects not only the local displacement efficiency (i.e., that evaluated by 1D experiments or calculation) but also the sweep efficiency.4 The mobility ratio and density contrast are large in most solvent floods. Sweep efficiency can be low as a result of fluid channeling, viscous fingering, and gravity override, and it plays a crucial role in determining the overall recovery efficiency. Simulations of secondary and tertiary solvent floods in several heterogeneous permeability fields have shown that floods with solvent enrichment at or below that required for development of multicontact miscibility in 1D flow can perform as well as or better than floods with richer solvents.4 Pande and Orr5 showed, by method-of-characteristics calculations, that the optimum pressure can be lower than the MMP in a two-layer reservoir. These results, if valid, are very important to solvent flood economics because they can reduce solvent cost.
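The role of the mobility ratio discussed above can be made concrete with a short calculation. This is only an illustrative sketch; the endpoint relative permeabilities and viscosities below are hypothetical values, not data from any of the cited studies.

```python
# Endpoint mobility ratio M = (kr_s / mu_s) / (kr_o / mu_o) for an
# injected solvent displacing oil. M >> 1 means the solvent is far more
# mobile than the oil, which promotes channeling, viscous fingering,
# and gravity override, i.e., poor sweep efficiency.

def mobility_ratio(kr_s, mu_s, kr_o, mu_o):
    """Mobility of the displacing solvent over mobility of the oil."""
    return (kr_s / mu_s) / (kr_o / mu_o)

# Hypothetical values: a thin solvent displacing a moderately viscous oil.
M = mobility_ratio(kr_s=0.8, mu_s=0.03, kr_o=0.9, mu_o=2.0)
print(round(M, 1))  # ≈ 59.3, a strongly adverse mobility ratio
```

Even a modest oil viscosity gives a strongly adverse mobility ratio, which is why sweep efficiency dominates recovery in most solvent floods.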
Such simulations, however, are always open to questions regarding the inclusion of all types of crossflow (e.g., capillary and dispersive), realistic spatial permeability variations, history-dependent relative permeability and capillary pressure hysteresis, and numerical dispersion.6 One objective of this work is to conduct solvent floods in a heterogeneous rock to determine whether optimum enrichment levels can be lower than the MME at laboratory scale. This would validate the simulation results reported by Pande4 and Pande and Orr.5 Generally speaking, as enrichment (or pressure) increases, microscopic displacement efficiency increases before leveling off at the MME (or MMP). However, sweep efficiency can decrease as enrichment (or pressure) increases.4 This decrease is not caused by the viscosity ratio or density contrast, which decrease as enrichment (or pressure) increases; rather, it may be caused by the interaction of phase behavior with the heterogeneous flow field.7,8 The second objective of this work is to study this interaction at laboratory scale. Blunt et al.1 and Blunt and Christie9 have significantly advanced the empirical theory of viscous fingering. Application of this theory to compositional floods assumes that the 1D average compositional path in fingered floods is the same as in 1D floods. This was the case for their 2D fine-grid simulations in a low-heterogeneity permeability field in the absence of gravity segregation; fingers were small compared with the widths of the systems in all their examples. The third objective of this study is to determine the effect of bypassing on the composition path of our laboratory floods and to verify whether this assumption of Blunt et al.'s1 theory applies in these corefloods. In the next two sections, we describe our experimental program and discuss the results.
Then, we describe the modeling of the experimental floods, the simulation of a three-component model system, and the interactions between phase behavior and flow bypassing. The last section summarizes our findings.

Experimental Procedure

The experimental program consisted of several corefloods on a vertically mounted 8-in.-long by 1.5-in.-diameter core (Fig. 1). Flow direction was from top to bottom. The experimental setup included a composite core holder with a constant-temperature jacket; an injection module consisting of two pressure vessels, for fluid transfer and gas injection and for overburden control; and a production module with a backpressure regulator (BPR) and a graduated centrifuge tube for recording recovery volumes. A Technicare Deltascan 2020 CT scanner oriented for vertical corefloods was used for the experiments. To achieve a complete core scan within 20 minutes, we used a 4.9-in. scan diameter, a 0.3-in. scan thickness, and 16 slices. Five corefloods were conducted: a matched-density/-viscosity miscible flood, a matched-density/adverse-viscosity miscible flood, an ethane flood of oil at 10 mL/hr, an ethane flood at 1 mL/hr, and a hydrocarbon gasflood of oil. All floods were conducted at irreducible water saturation, a 1,650-psi outlet pressure, and a 65°F system temperature. After each experiment, the core was cleaned with decalin and then resaturated with the particular "oil." The oil viscosity was ~80 cp, and the ethane and hydrocarbon-solvent viscosities were ~0.05 cp.
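As a quick consistency check (simple arithmetic on the viscosities quoted above, not additional data from the paper), the quoted fluid viscosities reproduce the high viscosity ratio cited for these floods:

```python
# Viscosity ratio of displaced oil to injected solvent, using the
# approximate viscosities quoted in the experimental description.
mu_oil = 80.0       # cp, oil
mu_solvent = 0.05   # cp, ethane / hydrocarbon solvent

ratio = mu_oil / mu_solvent
print(ratio)  # ≈ 1,600, matching the viscosity ratio cited for these floods
```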
Distributed-memory parallel computer architectures appear to offer high performance at moderate cost for reservoir simulation applications. In particular, compositional reservoir simulation shows great promise for parallel application because of the large parallel content of compositional formulations. This paper focuses on the application of a distributed-memory parallel computer, the iPSC/860, to two compositional simulations: one based on the Third SPE Comparative Solution Problem and another based on a real production compositional model. An improved linear-equation solution technique based on multigrid and domain decomposition methods is compared with other techniques in serial and parallel environments. For a hypothetical heterogeneous example, the new technique showed a high degree of parallel efficiency. Finally, results show that the performance of the compositional simulations with a fully parallel simulator is comparable to that of current mainframe supercomputers.
Recent architectural advances in the computer industry make it possible to solve computationally intensive flow problems, such as those in oil reservoirs, on several processors simultaneously. Different computer architectures have evolved from the connection of these processors: shared memory with a few processors, distributed memory with up to a few hundred processors, and massively parallel with several thousand processors. Oil industry researchers are developing efficient techniques that use these computers to improve hydrocarbon recovery from reservoirs. In this work, a generic approach is developed to solve the large system of sparse linear equations that arises in reservoir simulation. This approach uses a combination of domain decomposition and multigrid techniques, resulting in efficient and robust algorithms for sequential computers with one processor and for parallel computers with a few to several tens of processors. The efficiency and robustness of these methods are comparable with those of widely used sequential solvers for problems of practical interest, including implicit wells and faults. In parallel, these methods prove to be an order of magnitude faster on a 32-node iPSC/860 hypercube.
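The multigrid building block mentioned above can be sketched in miniature. The following is an illustrative two-grid correction cycle for a 1D Poisson problem, using damped-Jacobi smoothing, restriction by injection, and linear interpolation; it is not the solver described in the paper, only a minimal instance of the kind of kernel such methods are built from.

```python
# Minimal two-grid cycle for -u'' = f on [0, 1] with u(0) = u(1) = 0.
# Illustrative of the multigrid component only; not the paper's solver.

def residual(u, f, h):
    """r = f - A u for the standard 3-point Laplacian."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / h ** 2
    return r

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Damped-Jacobi smoothing sweeps (boundary values held fixed)."""
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h ** 2 * f[i])
        u = new
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                  # pre-smooth
    r = residual(u, f, h)
    rc = r[::2]                                    # restrict by injection
    ec = jacobi([0.0] * len(rc), rc, 2 * h, 200)   # near-exact coarse solve
    for i in range(1, len(u) - 1):                 # interpolate and correct
        u[i] += ec[i // 2] if i % 2 == 0 else 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    return jacobi(u, f, h, sweeps=3)               # post-smooth

# With f = 1 the exact solution is u(x) = x(1 - x)/2; a handful of
# cycles drives the error down by orders of magnitude.
n, h = 17, 1.0 / 16.0
u, f = [0.0] * n, [1.0] * n
for _ in range(30):
    u = two_grid(u, f, h)
err = max(abs(u[i] - (i * h) * (1 - i * h) / 2) for i in range(n))
print("max error:", err)
```

In practice, the coarse problem is solved recursively rather than by relaxation, and domain decomposition distributes the fine-grid work across processors.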
Over the past fifteen years, high-performance computing has had a significant impact on the evolution of numerical predictive methods for improved recovery from hydrocarbon reservoirs. The complexity of reservoir simulation models has led to computational requirements that have consistently taxed the fastest computers. This work discusses how current state-of-the-art parallel architectures have been investigated to allow models that more closely approach realistic simulations while emphasizing accuracy and efficiency. Several modeling approaches have been investigated on different parallel architectures. These investigations have, in general, shown great promise for the use of massively parallel computers in reservoir simulation. Despite these results, reservoir simulation has been slow to move toward parallel computing in a production environment. There appear to be several reasons for this. First, the recursive nature of existing linear solution techniques for reservoir modeling is not readily adaptable to massively parallel architectures. Second, the trade-off between load balancing and global data structure has yet to be thoroughly investigated. Finally, the role of well and facility constraints and production optimization in massively parallel processing may lead to severe serial bottlenecks. Several approaches to the solution of these difficulties are presented.

INTRODUCTION

From the earliest stages of reservoir simulation, models have continued to tax the capabilities of the largest computers. From both a numerical and a physical standpoint, larger and larger grids have been required to adequately model the processes occurring in reservoirs. Beginning in the mid-1970s, the introduction of supercomputing through vectorization completely changed the approach that had been taken toward the development of numerical models for reservoir simulation.
Although these computers significantly advanced the speed at which computations could be made, the leveraging of computational power through vectorization led to significant reorganization and reworking of existing models.1–3 Several publications in the literature have dealt with the application of parallel computing to petroleum reservoir simulation in shared-memory parallel environments. Scott et al.4 investigated the parallelization of the coefficient routines and linear equation solvers for a black-oil model on a Denelcor HEP. Chien et al.5 investigated compositional modeling in parallel on a CRAY X-MP 4/16. Barua and Horne6 applied parallel computing with a nonlinear equation solver for the black-oil case on the Encore Multimax. Killough et al.7 looked at parallel linear equation solvers on both the CRAY X-MP and the IBM 3090. Each of these applications involved the use of a shared-memory parallel computer. The question still remained whether a distributed-memory architecture could be used efficiently for the simulation of petroleum reservoirs. More recently, parallelization of reservoir simulators has been accomplished on distributed-memory parallel computers, on both multiple-instruction, multiple-datapath (MIMD) and single-instruction, multiple-datapath (SIMD) architectures. Work by van Daalen et al.8 showed a speedup of a factor of forty on sixty processors on the Transputer-based Meiko computer. Wheeler and Smith showed that black-oil modeling could be performed efficiently on a hypercube. The application of compositional reservoir modeling to the distributed-memory, message-passing Intel iPSC/2 hypercube was investigated by Killough and Bhogeswara.10,11
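The speedup figures quoted above, and the serial bottlenecks discussed earlier, can be put in perspective with two standard formulas: parallel efficiency and the Amdahl's-law bound. This is a hedged sketch using textbook definitions; the 1% serial fraction below is an illustrative assumption, not a measured value.

```python
# Parallel efficiency and Amdahl's-law speedup bound.

def efficiency(speedup, processors):
    """Fraction of ideal (linear) speedup actually achieved."""
    return speedup / processors

def amdahl_speedup(serial_fraction, processors):
    """Upper bound on speedup when part of the work is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# The cited Meiko result: a 40x speedup on 60 processors.
print(efficiency(40, 60))        # ≈ 0.667, i.e., about 67% efficiency

# An assumed 1% serial fraction (e.g., well management) already caps
# speedup well below linear on 60 processors.
print(amdahl_speedup(0.01, 60))  # ≈ 37.7
```

This illustrates why serial bottlenecks such as well and facility constraints matter so much: even a small inherently serial fraction limits achievable speedup at scale.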