The determination of the optimal type, location, and trajectory of a nonconventional well is very challenging. The problem is more complicated than other well optimization problems because of the wide variety of possible well types (i.e., number, location, and orientation of laterals) that must be considered. In this paper, a general methodology for the optimization of nonconventional wells is presented. The optimization procedure entails a Genetic Algorithm applied in conjunction with several acceleration routines that include an artificial neural network, a hill climber, and a near-well upscaling technique. The overall methodology is then applied to a number of problems involving different reservoir types and fluid systems. It is shown that the objective function (cumulative oil produced or net present value of the project) is always increased relative to its value in the first generation of the optimization, in some cases by 30% or more. The optimal well type is found to vary depending on the reservoir model and objective function. The effects of reservoir uncertainty are also included in some of the optimizations. It is shown that the optimal type of well can differ depending on whether single or multiple realizations of the reservoir geology are considered.

Introduction

Nonconventional wells, i.e., wells with an arbitrary trajectory and a number of branches or laterals, offer great potential for oil recovery. However, the complexity and generality of such wells make it difficult to deploy them optimally in a field setting. Basic questions, such as the optimal well type (i.e., number and orientation of laterals), well location, and trajectory, are difficult to address because the number of scenarios that must be evaluated is exceedingly large. The problem is considerably more complex than one involving monobore wells (straight vertical, inclined, or horizontal wells) because the additional unknowns related to the laterals greatly increase the size of the solution space that must be investigated. The problem becomes even more daunting when we attempt to account for the uncertainty in the geologic description of the reservoir.

The purpose of this paper is to introduce and apply a general procedure for the optimal deployment of nonconventional wells (NCWs). We consider both monobore and multilateral wells. The optimization approach entails the application of a Genetic Algorithm (GA), along with a number of acceleration or "helper" routines, used in conjunction with a reservoir simulator. We demonstrate the procedure on a number of realistic example cases in which the well location and trajectory, number of laterals, and well pressures or rates are optimized. We show that the resulting reservoir performance is significantly better than that obtained with the best well in the first generation of the optimization.

The goal of any optimization procedure is to find the highest or lowest global value of an objective function. If the objective function does not have a smooth surface or the search space is large, it is generally impossible to construct optimization techniques that are guaranteed to find the global optimum.1 Optimization techniques often use the derivative or gradient of the objective function in their search. The performance of such gradient-based algorithms degrades with the size of the problem and depends strongly on the initial guess of the solution vector.
Particularly for large problems, these methods are likely to converge to a local optimum, often in the neighborhood of the initial guess.

Optimization of NCWs is a complex problem because there are many variables to consider. The objective function in this case could be the net present value (NPV) of the project, the cumulative oil production from the field, or some other criterion. These objective functions are of high dimension and will in general have extremely rough surfaces. Gradient-based optimization methods are not well suited to problems of this type. It is more appropriate to apply a stochastic search algorithm in order to avoid becoming trapped in the numerous local optima.
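To make the structure of the procedure concrete, the Python sketch below couples a simple GA with two of the helper ideas described above: a cheap surrogate that screens candidates before they are simulated, and a hill climber that polishes the incumbent. It is a minimal illustration under stated assumptions, not the paper's implementation: simulate is a hypothetical placeholder for a reservoir-simulator run, the five-gene chromosome is an invented simplification of the well parameterization, and the inverse-distance surrogate merely stands in for the artificial neural network proxy.

# Minimal sketch of the GA-plus-helpers idea described above. Everything here
# is illustrative: simulate() stands in for a reservoir simulator run and the
# chromosome encoding is a hypothetical simplification of the well description.
import numpy as np

rng = np.random.default_rng(0)

N_GENES = 5          # e.g., heel x, heel y, number of laterals, azimuth, length
POP_SIZE = 20
N_GENERATIONS = 30

def simulate(x):
    # Placeholder objective (e.g., NPV); a real study would call a simulator.
    return -np.sum((x - 0.6) ** 2)

def proxy_fitness(x, archive_x, archive_f):
    # Cheap surrogate "helper": inverse-distance interpolation of past runs,
    # used to screen candidates before spending a full simulation on them.
    d = np.linalg.norm(archive_x - x, axis=1) + 1e-9
    w = 1.0 / d
    return np.sum(w * archive_f) / np.sum(w)

def hill_climb(x, f, step=0.02, tries=10):
    # Local "helper" search around the generation's best individual.
    for _ in range(tries):
        cand = np.clip(x + rng.normal(0, step, N_GENES), 0, 1)
        fc = simulate(cand)
        if fc > f:
            x, f = cand, fc
    return x, f

pop = rng.random((POP_SIZE, N_GENES))
fit = np.array([simulate(x) for x in pop])
archive_x, archive_f = pop.copy(), fit.copy()

for gen in range(N_GENERATIONS):
    # Tournament selection, one-point crossover, Gaussian mutation.
    children = []
    while len(children) < POP_SIZE:
        i, j = rng.choice(POP_SIZE, 2, replace=False)
        p1 = pop[i] if fit[i] > fit[j] else pop[j]
        i, j = rng.choice(POP_SIZE, 2, replace=False)
        p2 = pop[i] if fit[i] > fit[j] else pop[j]
        cut = rng.integers(1, N_GENES)
        child = np.concatenate([p1[:cut], p2[cut:]])
        child = np.clip(child + rng.normal(0, 0.05, N_GENES), 0, 1)
        children.append(child)
    children = np.array(children)

    # Proxy screening: simulate only the candidates the surrogate ranks best.
    scores = [proxy_fitness(c, archive_x, archive_f) for c in children]
    keep = np.argsort(scores)[-POP_SIZE // 2:]
    for k in keep:
        f = simulate(children[k])
        archive_x = np.vstack([archive_x, children[k]])
        archive_f = np.append(archive_f, f)
        worst = np.argmin(fit)
        if f > fit[worst]:
            pop[worst], fit[worst] = children[k], f

    # Polish the incumbent with the hill climber.
    best = np.argmax(fit)
    pop[best], fit[best] = hill_climb(pop[best], fit[best])

print("best objective:", fit.max())

The screening step is where the acceleration comes from: only the half of each generation that the surrogate ranks highest is ever passed to the expensive simulator.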
The experimental design method is an alternative to traditional sensitivity analysis. The basic idea behind this methodology is to vary multiple parameters at the same time so that maximum inference can be obtained at minimum cost. Once the appropriate design is established and the corresponding experiments (simulations) are performed, the results can be investigated by fitting them to a response surface. This surface is usually an analytical or simple numerical function that is cheap to sample. It can therefore be used as a proxy for reservoir simulation when quantifying uncertainties. Designing an efficient sensitivity study poses two main issues:

- designing a parameter-space sampling strategy and carrying out the experiments; and
- analyzing the results of the experiments (response surface generation).

In this paper we investigate these steps by testing various experimental designs and response surface methodologies on synthetic and real reservoir models. We compared conventional designs, such as Plackett-Burman, central composite, and D-optimal designs, with a space-filling design technique that aims at optimizing the coverage of the parameter space. We analyzed these experiments using linear and second-order polynomials as well as more complex response surfaces such as kriging, splines, and neural networks. We compared these response surfaces in terms of their ability to estimate the statistics of the uncertainty (i.e., P10, P50, and P90 values), their estimation accuracy, and their ability to identify the influential parameters (heavy hitters). Comparison with our exhaustive simulations showed that experiments generated by the space-filling design and analyzed with kriging, splines, and quadratic polynomials gave the greatest accuracy, while traditional designs and the associated response surfaces performed poorly for some of the cases we studied. We also found good agreement between polynomials and the more complex response surfaces in terms of estimating the effect of each parameter on the response.

Introduction

Reservoir simulators are capable of integrating detailed static geological information with dynamic engineering data to represent the complex fluid flow in porous media. They have therefore been used extensively for the planning and evaluation of field development projects. Usually, economic parameters such as net present value (NPV), or recovery estimates such as cumulative oil production, are used to assess the value of the different alternatives in a development study. Since most of the inputs to simulation studies are uncertain and uncontrollable (such as static reservoir properties), many sensitivity studies have to be performed, which can be prohibitive due to costly simulations.
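As a concrete illustration of the design-and-proxy idea, the short Python sketch below generates a space-filling (Latin hypercube) design, fits a quadratic response surface by least squares, and runs a Monte Carlo on the cheap surface to estimate percentiles. It is only a sketch under invented assumptions: true_model is a hypothetical stand-in for an expensive simulation, and the three parameters and their uniform priors are placeholders.

# Minimal sketch of the design/response-surface loop, assuming a hypothetical
# 3-parameter reservoir response true_model in place of a simulator.
import numpy as np

rng = np.random.default_rng(1)
DIM, N_RUNS = 3, 30

def latin_hypercube(n, d):
    # Simple space-filling design: stratified samples, shuffled per dimension.
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for k in range(d):
        rng.shuffle(u[:, k])
    return u

def true_model(x):
    # Stand-in for an expensive simulation (e.g., cumulative oil).
    return 100 + 40 * x[..., 0] - 25 * x[..., 1] ** 2 + 10 * x[..., 0] * x[..., 2]

def quad_features(x):
    # Full quadratic basis: 1, x_i, x_i * x_j (i <= j).
    cols = [np.ones(x.shape[0])]
    cols += [x[:, i] for i in range(DIM)]
    cols += [x[:, i] * x[:, j] for i in range(DIM) for j in range(i, DIM)]
    return np.column_stack(cols)

X = latin_hypercube(N_RUNS, DIM)           # design the "experiments"
y = true_model(X)                          # run the "simulations"
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Monte Carlo on the cheap response surface to estimate the percentiles.
samples = rng.random((100_000, DIM))       # uniform priors, for illustration
pred = quad_features(samples) @ beta
print("10th/50th/90th percentiles:", np.percentile(pred, [10, 50, 90]))

Only N_RUNS expensive evaluations are required; the hundred thousand Monte Carlo samples are drawn from the fitted surface, which is the whole point of using it as a proxy.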
Experimental design methodology not only offers an efficient way of assessing uncertainties by providing inference with a minimum number of simulations, but can also identify the key parameters governing uncertainty in economic and production forecasts, which may guide the data acquisition strategy during the early phases of a field development project.[1] The commonly used workflow for this purpose is as follows (a minimal sketch of the screening steps is given after the list):

1. Define a large set of potential key parameters and their probability distributions.
2. Perform a low-level experimental design study, such as Plackett-Burman, which combines the high and low values of the key parameters.
3. Perform the simulations corresponding to each of the experiments.
4. Fit the economic or recovery estimates obtained from the simulations to a simple response surface, which is usually linear.
5. Using the probability distributions attached to the parameters, perform a Monte Carlo simulation on the response surface.
6. Generate a tornado diagram to rank the effect of each parameter on the economic or recovery estimates.
7. Screen the heavy hitters from the tornado diagram.
8. Perform a more detailed design, such as full/fractional factorial, D-optimal, Box-Behnken, or central composite, with the heavy hitters.
9. Repeat steps 3 and 4.
10. Perform a Monte Carlo simulation on the new response surface to obtain the probability density function (pdf) of the economic or recovery estimates.
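The sketch below illustrates screening steps 2, 4, and 6 from the list above. It is illustrative only: a full two-level factorial stands in for a Plackett-Burman design, the three parameter names are invented, and response is a hypothetical placeholder for one simulation per design row.

# Hedged sketch of the screening workflow: a two-level design, a linear
# response surface, and a tornado-style ranking of parameter effects.
import itertools
import numpy as np

PARAMS = ["perm_mult", "aquifer_strength", "skin"]

def response(levels):
    # Placeholder recovery estimate; a real study would run one simulation
    # per design row. Levels are coded -1 (low) / +1 (high).
    x = dict(zip(PARAMS, levels))
    return 500 + 80 * x["perm_mult"] - 15 * x["aquifer_strength"] + 5 * x["skin"]

# Two-level design matrix in coded units (2^3 full factorial as a stand-in).
design = np.array(list(itertools.product([-1, 1], repeat=len(PARAMS))))
y = np.array([response(row) for row in design])

# Fit the linear response surface y ~ b0 + sum(b_i * x_i).
A = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Tornado ranking: |effect| of each parameter, largest first.
effects = sorted(zip(PARAMS, coef[1:]), key=lambda p: -abs(p[1]))
for name, b in effects:
    print(f"{name:18s} effect = {2 * b:+.1f}")  # low-to-high swing = 2*b

Parameters whose swing dominates the ranking are the heavy hitters carried into the more detailed second-stage design.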
Faults can act as fluid flow barriers, conduits, or barrier/conduit systems in reservoirs. Their accurate representation in reservoir flow simulations is essential if realistic predictions are to be attained. In this work we compute the effective flow characteristics of faults using fine-scale field-based data. The faults we focus on are in porous aeolian sandstone and were formed by shearing along pre-existing joint zones. To find the bulk flow characteristics of the fault zones, we develop a computationally efficient upscaling methodology that combines numerical flow modeling and power averaging. By analyzing faults with different slip magnitudes, we are able to produce a relationship between fault permeability and fault slip. Slip magnitude is one of the few fault parameters that can be measured remotely in the subsurface, and we show how it can be used to estimate the variation in permeability along a fault. We present three different flow simulation scenarios using variable fault properties derived with our new procedure. For each scenario, we present a second, tuned case in which we replace our variable fault-zone permeability with a fault of constant permeability and width. In one case, we find no significant difference in flow response between the variable and constant permeability faults. The other two cases display differences, mostly with regard to breakthrough time and liquid production rates. Because the reservoir flows considered here are relatively simple, we postulate that the differences between the variable and constant permeability fault descriptions will become greater for more complex systems.

Introduction

Faults are common features in oil and gas reservoirs. They can act to impede or enhance fluid flow dramatically,1 thereby playing an important role in reservoir performance.2 However, despite their strong impact on flow, typical reservoir simulation models represent faults in a highly simplified manner. Faults in these models are often used as adjustable parameters, with their gross impact on flow behavior "tuned" so that the global model predictions agree with observed production data. The use of these models as predictive tools is therefore quite limited in many cases.

The purpose of this paper is to develop and apply a new procedure for assigning permeability values to grid blocks representing the fault zone in flow simulation models. We consider the case of faults in sandstone reservoirs. The grid block permeability values are determined using the results from detailed analog outcrop studies3 and from previously developed numerical solutions, using a power averaging technique3,4 and a full numerical solution5,6 for computing fault zone permeabilities. These results provide an estimate of fault zone permeability, on the scale of 1–20 meters, as a function of the local fault slip magnitude. By combining these results with larger scale geologic measurements that provide estimates of the variation in fault slip over the length of the fault, we are able to estimate fault zone permeability along the entire fault. Through the application of this procedure, we demonstrate the impact of detailed fault zone descriptions on large-scale reservoir flows. Comparisons with large-scale flow results using a simple fault treatment, as commonly employed in current practice, are also presented. These comparisons demonstrate the qualitative improvements obtained using our procedure and the inaccuracies inherent in the simpler approaches.
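The power-averaging step at the heart of this upscaling can be sketched compactly. In the Python fragment below, power_average implements the standard power-average form k_eff = (mean(k^p))^(1/p); the exponent values, the lognormal fine-scale permeabilities, and the exponential slip-to-permeability trend are all illustrative placeholders rather than the field-calibrated relationships developed in the paper.

# Minimal sketch of power averaging for fault-zone upscaling. The exponent p
# and the slip-to-permeability trend below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)

def power_average(k, p):
    # k_eff = ( mean(k^p) )^(1/p); p -> 1 is arithmetic, p -> -1 is harmonic.
    if abs(p) < 1e-12:                    # the p -> 0 limit is the geometric mean
        return np.exp(np.mean(np.log(k)))
    return np.mean(k ** p) ** (1.0 / p)

# Fine-scale fault-zone permeabilities (mD) for one 1-20 m fault segment.
k_fine = rng.lognormal(mean=np.log(0.5), sigma=1.0, size=1000)

# Cross-fault flow is resistor-like (p near -1); fault-parallel flow is
# closer to arithmetic (p near +1).
k_cross = power_average(k_fine, -0.9)
k_parallel = power_average(k_fine, 0.9)
print(f"cross-fault k = {k_cross:.3f} mD, fault-parallel k = {k_parallel:.3f} mD")

# Hypothetical slip -> permeability trend used to populate grid blocks along
# the fault from remotely measurable slip magnitudes.
def fault_perm_from_slip(slip_m, k0=1.0, decay=0.15):
    return k0 * np.exp(-decay * slip_m)   # illustrative form only

for slip in (5.0, 20.0, 50.0):
    print(f"slip {slip:5.1f} m -> k ~ {fault_perm_from_slip(slip):.4f} mD")

The directional exponents capture why a single constant fault permeability can misrepresent both sealing and channeling behavior at once.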