Objectives/Scope
Hydraulic fracturing is today standard practice when developing unconventional reservoir plays. It is studied through different models, built on extensive gathering and analysis of characterization data. Unfortunately, numerical limitations impose drastic simplifications (a limited number of fractures, some data being ignored…), leading to simple fracture geometries that lack the observed complexity. This limits any expectation of design optimization. Our objective is to show that the calibration data used for simpler models, along with microseismic measurements, can lead to more realistic hydraulic fracturing geometries. The results can be linked to a reservoir platform to forecast production. The presented computationally efficient method, within which sensitivity analysis is performed, highlights the key parameters governing the stimulation process. This study shows that the tool used is well suited to practical scenario design and evaluation.

Methods, Procedures, Process
The method used to generate realistic fracture geometries requires information at all scales (seismic, logs, cores…) as well as numerical tools able to handle geomechanics and fluid flow over a great number of fractures (as required by the characterization). All data are thus input into a single representative 3D Deformable Discrete Fracture Network (DDFN), on which the hydraulic stimulation is simulated. Characterization is based on geostatistical concepts applied to both natural and hydraulically induced fractures, driven by geological and geomechanical data. The process is simulated using a single-phase hydrodynamic model within the DDFN (with a specific discretization) under far-field stress conditions. Fracture behavior is governed by geomechanical laws, both reversible and irreversible, with an approximate proppant model. Various scenarios are tested by varying either uncertain geomechanical parameters or characterization parameters (a scenario-screening sketch follows this abstract). The observed in-situ Bottom Hole Pressure (BHP) and microseismic characteristics (shape, frequency…) are then history-matched.

Results, Observations, Conclusions
For each simulated scenario, the quality of the history match is shown and discussed, stressing the representativeness of the data involved. The method has proven computationally efficient and robust enough to support hundreds of thousands of fractures, while also being able to simulate simpler cases. Within the studied framework, ties with an existing reservoir platform are also shown. Advantages of such an approach are highlighted, as are current limitations of classical reservoir models.

Novel/Additive Information
This work, undertaken at different scales, demonstrates the new possibilities of computationally robust algorithms within an approach that considers both the geological setting and the geomechanical properties. The model offers the possibility of integrating several scales into an adaptive discretization scheme.
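A minimal, hypothetical sketch of the scenario screening and BHP history-matching loop described in the Methods section above: uncertain geomechanical and characterization parameters are enumerated, each DDFN stimulation run is scored against observed bottom-hole pressure, and scenarios are ranked by misfit. The function `run_ddfn_stimulation`, the parameter ranges and the synthetic observations are placeholders, not the actual simulator or field data.

```python
# Hypothetical scenario-screening loop; all names and values are illustrative.
import itertools
import numpy as np

def run_ddfn_stimulation(youngs_modulus_gpa, fracture_density_per_m, times_s):
    """Placeholder for a DDFN hydraulic-stimulation run returning BHP(t)."""
    # Smooth synthetic response standing in for the simulator output.
    return 30.0 + 0.5 * youngs_modulus_gpa * np.log1p(times_s) \
           - 2.0 * fracture_density_per_m

def misfit(simulated_bhp, observed_bhp):
    """Root-mean-square mismatch used to rank scenarios."""
    return float(np.sqrt(np.mean((simulated_bhp - observed_bhp) ** 2)))

times = np.linspace(0.0, 3600.0, 50)              # one hour of pumping
observed = 30.0 + 10.0 * np.log1p(times) - 4.0    # synthetic "field" BHP

# Uncertain parameters: rock stiffness and natural-fracture density.
scenarios = itertools.product([15.0, 20.0, 25.0],  # Young's modulus [GPa]
                              [1.0, 2.0, 3.0])     # fracture density [1/m]

ranked = sorted(
    (misfit(run_ddfn_stimulation(e, d, times), observed), e, d)
    for e, d in scenarios)
best_rms, best_e, best_d = ranked[0]
print(f"best scenario: E={best_e} GPa, density={best_d} 1/m, RMS={best_rms:.2f}")
```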
In this work, we show how the a posteriori error estimation techniques proposed in [Computers & Mathematics with Applications 68, 2331-2347] can be efficiently employed to improve the performance of a compositional reservoir simulator dedicated to Enhanced Oil Recovery (EOR) processes. This a posteriori error estimate allows us to propose an adaptive mesh refinement algorithm, leading to a significant gain in the number of mesh cells compared to a fine-mesh resolution, and to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver without any loss of precision (see the sketch after this abstract). The emphasis of this paper is on the computational cost of the error estimators. We introduce an efficient computation using a practical simplified formula that can be easily implemented in a reservoir simulation code. Numerical results for a real-life three-dimensional reservoir engineering example show that we obtain a significant gain in CPU time without affecting the accuracy of the oil production forecast.
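As an illustration of the adaptive stopping criteria mentioned above, here is a minimal sketch assuming the usual splitting of the total a posteriori estimator into a spatial (discretization) part and an algebraic part: the iterative linear solver is stopped once the algebraic estimator falls below a fraction gamma of the spatial estimator, so no solver accuracy is wasted below the discretization error. The estimators used here are crude surrogates (the residual norm and a fixed constant), not the estimators of the cited reference, and the solver is a simple damped Richardson iteration rather than the simulator's actual solver.

```python
# Sketch of estimator-based stopping; estimators and solver are surrogates.
import numpy as np

def solve_with_adaptive_stopping(A, b, x0, spatial_estimator,
                                 algebraic_estimator, gamma=0.1, max_it=200):
    """Damped Richardson iteration stopped by an estimator-ratio criterion."""
    x = x0.copy()
    step = 0.5 / np.linalg.norm(A, ord=2)   # safe damping for SPD A
    for it in range(1, max_it + 1):
        x = x + step * (b - A @ x)
        # Stop as soon as the algebraic error estimate is dominated by the
        # spatial (discretization) error estimate.
        if algebraic_estimator(A, b, x) <= gamma * spatial_estimator(x):
            return x, it
    return x, max_it

# Toy usage: algebraic estimator = residual norm, spatial estimator = constant.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = solve_with_adaptive_stopping(
    A, b, np.zeros(2),
    spatial_estimator=lambda x: 1e-2,
    algebraic_estimator=lambda A, b, x: np.linalg.norm(b - A @ x))
print(f"stopped after {iters} iterations, x = {x}")
```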
Abstract
New parallel reservoir simulator software designed for Linux clusters enables us to overcome hardware limitations and to simulate models with large amounts of data. The reservoir engineering industry is very interested in using ever-growing datasets with increasingly complex physics and detailed models. The key issue remains running simulations in an acceptable CPU time. As the trend in hardware technologies is not to drastically improve the performance of individual CPUs but to facilitate the aggregation of computation facilities (high-bandwidth networks, multi-core architectures…), the challenge is to improve the efficiency of reservoir simulation software on a large number of processors.
New numerical difficulties and performance problems appear as the number of cells and the number of processors grow. As a matter of fact, the architecture of Linux clusters is very sensitive to memory distribution and load balancing:
• the cost of the parallel solver algorithm is usually sensitive to the size of the reservoir model (lack of scalability), and the consequences on CPU performance can no longer be neglected;
• the domain decomposition algorithms used to distribute data between processors have a great influence on the computing load balance between processors;
• using adaptive numerical schemes with dynamic spatial criteria (AIM schemes, flash algorithms based on the thermodynamic state of each cell) is a source of imbalance that cannot be resolved statically (see the load-balancing sketch after this abstract);
• storing simulation results on irregular data structures, such as unstructured grids, multilateral smart wells and perforated cells, leads to a large amount of information being stored during the simulation. With the variety of IO subsystems found on Linux clusters, the simulator must be able to adapt its IO strategy to the underlying IO library/file system and hardware.
In this paper, we present different approaches to overcome these kinds of problems. We discuss technical choices such as:
• an advanced scalable linear solver algorithm;
• load balancing with different domain decomposition strategies;
• dynamic spatial criteria, mesh partitioner strategy and parallel solver performance management;
• a flexible IO strategy, from a simple file system to a more complex parallel file system or database.
We have developed and benchmarked these different solutions on published reference large-scale problems and actual case studies with several tens of millions of cells. We analyze the results and discuss the efficiency of each solution in overcoming the scalability difficulties and the performance limitations due to load imbalance.
Introduction
Improving the robustness and performance of parallel reservoir simulators on new high-performance computing architectures remains a key issue in dealing with the ever-growing complexity and size of reservoir models. The simulator discussed in this paper is a multi-purpose parallel reservoir simulator which implements the physical options necessary for sophisticated reservoir engineering such as black-oil, multi-c...
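A hedged sketch of the load-balancing idea raised in the bullet list above: with adaptive implicit (AIM) schemes and state-dependent flash calculations, the per-cell cost varies at run time, so the partitioner should balance estimated cell weights rather than cell counts. The cost weights and the contiguous 1D split below are illustrative assumptions, not the simulator's actual partitioning strategy.

```python
# Weight-balanced partitioning sketch; weights and split are illustrative.
import numpy as np

def partition_by_weight(cell_weights, n_procs):
    """Greedy contiguous split: cut the cell ordering where the cumulative
    weight crosses multiples of (total_weight / n_procs)."""
    cumulative = np.cumsum(cell_weights)
    target = cumulative[-1] / n_procs
    cuts = np.searchsorted(cumulative, target * np.arange(1, n_procs))
    return np.split(np.arange(len(cell_weights)), cuts)

# Toy cost model: an implicit cell costs ~3x an IMPES cell, and a cell that
# needs a full multi-component flash costs an extra 2x.
rng = np.random.default_rng(0)
n_cells = 1_000
is_implicit = rng.random(n_cells) < 0.2
needs_flash = rng.random(n_cells) < 0.1
weights = 1.0 + 2.0 * is_implicit + 2.0 * needs_flash

domains = partition_by_weight(weights, n_procs=4)
for rank, cells in enumerate(domains):
    print(f"rank {rank}: {len(cells)} cells, weight {weights[cells].sum():.0f}")
```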
Low-permeability reservoirs are currently being produced using horizontal wells and massive hydraulic fracturing operations. The design of stimulation jobs requires an integrated knowledge of the reservoir (lithology, mechanical properties, fracture properties, PVT, etc.) and calls for calibration and scenario-simulation capabilities. Tools permitting such a workflow exist, yet they are rarely fully integrated within a single package. In this paper we aim to show the advantages of using two new tools, presently developed as prototypes, namely an unstructured fracture model and a multiphysics coding platform designed to integrate all concepts currently under research pertaining to unconventional reservoirs. Characterization of unconventional reservoirs implies the reconciliation of several scales, demanding the integration of a potentially large fracture information database. Today's models have difficulty integrating all the information contained in such a "rich" database. Thus, improving upon current Discrete Fracture Networks (DFN) would require many fractures to be accounted for (up to 500,000). Coupling of such DFNs to reservoir modeling packages often uses upscaling methods, resulting in models that are in turn simulated using extensions of classical dual-continuum models. Current reservoir models, however, do not integrate all the physical phenomena pertinent to gas or multiphase production, such as dynamic permeability varying with pressure, non-equilibrium effects, multicomponent adsorption, diffusion effects or proper transfer functions between matrix and fractures. Using a realistic example inspired by field data, we show how the construction of a fracture model based on a consistent Discrete and Deformable Fracture Network (DDFN), tractable for multiphase-flow reservoir simulations, can help describe a complex fracturing case. The use of a coding platform tailored to pertinent unconventional physics is discussed through examples of multiphysics geoscience applications developed on it. The example shows how the representation of a multistage operation through a DDFN model, built from the joint characterization of the field's natural fracture system and of the propagating fracture network corresponding to the hydraulic fracturing process, and calibrated on the BHP and the microseismic cloud, is input as a specific unstructured dual discretization into a reservoir model. This explicit description of the fracture geometry is coupled to a non-discretized matrix refinement function accounting for matrix heterogeneities, well adapted to the dynamic pressure behavior observed in such reservoirs. A generalized multiple interacting continua formulation (named "transient transfer influence function") is used within the matrix medium, allowing the simulation of the long transient period typical of many unconventional reservoirs and thus improving the matrix contribution during hydraulic fracturing. Because the full process may include several hundred thousand fractures and approximately the same number of cells for the matrix medium, we show how run-time performance is improved through a preconditioning technique that reduces the condition number of the matrix associated with the linear system and speeds up the convergence of the iterative parallel linear solver (see the preconditioning sketch after this abstract). The discussion of the results obtained with the integrated DDFN is extended to the potential use of an adapted computational platform for the inclusion of specific physics pertinent to unconventional reservoirs.
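An illustrative sketch (not the authors' preconditioner) of why reducing the condition number speeds up the iterative linear solve mentioned above: a strongly heterogeneous diffusion-type system, mimicking fracture/matrix transmissibility contrasts, is solved with conjugate gradients with and without a simple Jacobi (diagonal) preconditioner, and the iteration counts are compared. A production simulator would use a stronger parallel preconditioner; the point here is only the effect of conditioning on convergence.

```python
# Conditioning illustration: CG iteration counts with/without Jacobi scaling.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_heterogeneous_laplacian(n, contrast=1e4):
    """1D finite-volume Laplacian with strongly contrasting transmissibilities,
    mimicking fracture/matrix permeability contrasts."""
    trans = np.where(np.arange(n - 1) % 10 == 0, contrast, 1.0)
    lower = -trans
    diag = np.zeros(n)
    diag[:-1] += trans
    diag[1:] += trans
    return sp.diags([lower, diag, lower], offsets=[-1, 0, 1], format="csr")

def count_cg_iterations(A, b, M=None):
    counter = {"it": 0}
    def cb(_xk):
        counter["it"] += 1
    spla.cg(A, b, M=M, callback=cb)
    return counter["it"]

n = 2_000
A = make_heterogeneous_laplacian(n) + sp.identity(n)   # keep the system SPD
b = np.ones(n)

jacobi = sp.diags(1.0 / A.diagonal())                   # diagonal preconditioner
print("no preconditioner    :", count_cg_iterations(A, b), "iterations")
print("Jacobi preconditioner:", count_cg_iterations(A, b, M=jacobi), "iterations")
```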
This DDFN approach can computationally handle hundreds of thousands of fractures coupled to a fluid-flow simulator. The platform on which it was implemented could be extended to multiphysics problems, which are essential for unconventional resources.