Summary

This paper presents drilling, completion, well-performance, and reservoir-characterization results of a recently drilled maximum-reservoir-contact (MRC) well in the Shaybah field with a total of eight laterals and an aggregate reservoir contact of 12.3 km (7.6 miles). The well was drilled as part of a pilot program to evaluate both the practical challenges and the reservoir-performance impact of MRC wells. The results to date on eight MRC wells in Shaybah indicate significant sustainable gains in well productivity as well as reductions in unit-development costs. A useful byproduct of MRC drilling is the enhancement achieved in reservoir characterization. These benefits point to MRC wells as disruptive technologies (DTs)1,2 that have positive implications for developing tight-facies reservoirs.
The Haradh III project came on stream in February 2006, adding 300,000 B/D of Arabian light crude production capacity to Ghawar, the world's largest oil field. The project's main significance, however, derives from the fact that it set a milestone for smart technologies at a scale and complexity unprecedented for Saudi Aramco and, arguably, for the industry. Haradh III might be regarded as the entry point to a new era in upstream projects, and specifically into the domain of real-time reservoir management. The project spanned a period of 21 months. It entailed construction of a grassroots surface-facility network integrated with a complex subsurface development program. Maximum-reservoir-contact (MRC) wells, smart completions, geosteering, and i-field features provided the four main technology components. Their efficient integration was the key to the project's success.

Background

Haradh constitutes the southernmost portion of the Ghawar complex and covers an area 75 km long and 26 km wide at its widest section (Fig. 1). The field consists of three subsegments of approximately equivalent reserves, with an aggregate oil initially in place of 38 billion STB. Initial production from Haradh I occurred in May 1996, followed by Haradh II and Haradh III in April 2003 and February 2006, respectively. The field developments, occurring over a span of a decade, offer a unique opportunity for gauging the impact of technologies, the main thrust of this article. Haradh I was developed exclusively with vertical wells, whereas horizontal completions provided the primary configuration for producers/injectors in Haradh II. Haradh III, the main focus here, was developed by relying mainly on smart MRC completions within an i-field framework (Fig. 2). The total Haradh production capacity is 900,000 B/D, with equal contributions from the three respective subsegments I, II, and III.
Arab-D, the producing horizon, belongs to the lower member of the Arab formation of the Jurassic period. It is characterized by a complex sequence of anhydrite and limestone events, with varying degrees of dolomitization. Faults, fractures, and fracture swarms were known to be part of the regional geology and attracted considerable attention in the project planning, given their propensity for creating water-encroachment problems.

Project Statistics

Table 1 presents the key project statistics for Haradh III. The project entailed a production target of 300,000 B/D, using 32 multilateral wells. A peripheral water-injection program (with an ultimate capacity of 560,000 BWPD) preceded the crude production by 4 months as part of the planned pressure-maintenance program.
Summary

This paper evaluates reservoir performance forecasting. Actual field examples are discussed, comparing past forecasts with observed performances. The apparently weak correlation between advances in technology and forecasting accuracy is assessed. Parallel planning is presented as an approach that can significantly accelerate reservoir forecasts. The recognition of inevitable forecasting uncertainties constitutes the philosophical basis of parallel planning.

Introduction

To say that reservoir performance forecasting is not an exact science would be an understatement. Even with all the significant advances occurring across a wide spectrum of related areas, questions still remain regarding the reliability of reservoir predictions. In fact, our efforts today are aimed as much at defining the limits of uncertainty envelopes as at producing forecasts. The discussion here pursues the following questions: What are realistic accuracy expectations in performance forecasts? And are our conventional thought processes in modeling inherently ill-structured to produce rapid forecasts? By their very nature, EOR processes introduce additional levels of complexity to forecasting. This discussion relates mainly to conventional reservoir systems.

Forecasting Methods and Uncertainty

Methods and Limits. Current reservoir-performance-forecasting methods can be classified into two broad categories: empirical and mathematical. This paper focuses on finite-difference methods because they represent the predominant industry-wide vehicle in reservoir evaluations. Empirical methods, such as decline curves, are useful, yet they have a limited application domain. Continuation of past production practices and mechanisms is a precondition for forecast reliability. Likewise, hybrid methods, while suitable for a wide class of problems (e.g., miscible and pattern floods), have not yet fully matured to offer a universal forecasting capability.
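The decline-curve methods mentioned above are typically expressed through Arps' relations. A minimal sketch, with a hypothetical well rate and decline parameters chosen purely for illustration (none of these numbers come from the paper):

```python
import math

def arps_rate(qi, di, b, t):
    """Arps decline: rate at time t (years).
    qi: initial rate (STB/D); di: initial nominal decline (1/yr);
    b: decline exponent (b = 0 exponential, 0 < b < 1 hyperbolic, b = 1 harmonic)."""
    if b == 0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Hypothetical well: 2,000 STB/D initial rate, 25%/yr initial decline.
for t in range(6):
    q_exp = arps_rate(2000.0, 0.25, 0.0, t)
    q_hyp = arps_rate(2000.0, 0.25, 0.5, t)
    print(f"year {t}: exponential {q_exp:7.1f} STB/D, hyperbolic {q_hyp:7.1f} STB/D")
```

The sketch also illustrates the stated precondition: the fit only extrapolates reliably while past production practices and mechanisms continue.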
Lorenz recognized the stochastic nature and hence the inherent limitations of weather forecasting. Lorenz's celebrated "butterfly effect" example points to inevitable limits of predictability. The analogies between weather and reservoirs have been noted; specifically, the sensitivity of reservoir performance, and hence of forecasts, to certain geologic parameters (i.e., flow boundary conditions) has been highlighted. This sensitivity suggests that performance forecasts will remain uncertain indefinitely. Both internal and external reservoir factors contribute to forecast uncertainties (Fig. 1). When model forecasts diverge from actual performance, distinctions among primary causes are sometimes lost. For example, accurate models may produce apparently poor forecasts when presumed field-management strategies and facility outlays are not actually implemented as a result of external factors. When model forecasts duplicate actual performance, this can also be misinterpreted as model validation. In fact, the duplication could simply reflect compensating errors among the internal and external factors. The point here is that accurate forecasts do not mean accurate models. (The corollary also appears noteworthy: poor forecasts do not necessarily equate to poor reservoir models.) The nature of the oil industry limits predictability of external factors, such as exact field operating practices. At best, multiple forecasts need to be developed for a range of external factors. Of the four uncertainty causes in Fig. 1, data quality and mathematical solutions are becoming less pronounced, and reservoir characterization and scale-up present the primary obstacles to improving performance forecasts. The lack of determinism in both external and internal factors suggests only the obvious: all reservoir performance forecasts carry a band of uncertainty. Ballin et al. attempted to quantify this uncertainty for a special class of problems.
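One common way to develop the multiple forecasts called for above is to sample the uncertain internal and external factors and report a percentile envelope rather than a single curve. A minimal sketch, using an exponential decline as a stand-in for a full reservoir model and entirely hypothetical parameter ranges:

```python
import math
import random

def forecast(qi, di, years):
    """Exponential-decline forecast, standing in for a full reservoir model."""
    return [qi * math.exp(-di * t) for t in range(years)]

random.seed(0)
runs = []
for _ in range(500):
    qi = random.uniform(1800.0, 2200.0)  # internal factor: initial-rate uncertainty
    di = random.uniform(0.20, 0.35)      # external factor: operating-practice spread
    runs.append(forecast(qi, di, 10))

# P90/P50/P10 envelope per year (P90 = value exceeded in 90% of runs)
for t in range(10):
    rates = sorted(r[t] for r in runs)
    p90, p50, p10 = rates[50], rates[250], rates[450]
    print(f"year {t}: P90 {p90:6.0f}  P50 {p50:6.0f}  P10 {p10:6.0f} STB/D")
```

The band itself, not any single realization, is the deliverable; this is the "band of uncertainty" that every forecast carries.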
Haldorsen and Damsleth described a general methodology for producing stochastic forecasts.

Discretization

Both geostatistical and finite-difference models discretize reservoirs. The two models, however, have dissimilar discretization scales (geostatistical models use inches to several feet; finite-difference models use hundreds of feet). Current and projected hardware and software limitations suggest that the discretization gap between finite-difference and geostatistical models will not disappear for giant fields. Consider the multibillion-barrel ATL/INB field in West Africa. A finite-difference model using 1-ft³ cells would require about 0.5 trillion cells. The corresponding figure for the Safaniya field in the Middle East is about 7 trillion cells. Cells of 1 in.³ would imply models with roughly 800 trillion cells for the ATL/INB field. These numbers imply our indefinite need for a scale-up process and hence the resulting uncertainties.

Homogenization

An obvious outcome of the scale-up process is homogenization. Porosity/permeability transforms, often used to describe permeability fields, also contribute to homogenized property assignments in simulation models. Fig. 2 gives the porosity/permeability core data for the ATL/INB field. This field exhibits a complex lithology of predominantly silica sands intermixed with dolomite. The use of a single-variable transform, represented by the solid line, filters the observed variability in the core data. An alternative approach that would reduce the homogenization effect is the use of "cloud transforms" developed by Kasischke and Williams. Cloud transforms produce property representations in models that can mimic distributions observed in real data (e.g., cores or logs). Fig. 3 shows a sample distribution generated by a cloud transform for the Elk Hills 26R reservoir.
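The cell counts quoted above are simple volume ratios: refining from 1-ft³ to 1-in.³ cells multiplies the count by 12³ = 1,728, which takes the 0.5 trillion cells of the ATL/INB model to about 864 trillion, i.e., the "roughly 800 trillion" figure. A quick check:

```python
FT3_PER_IN3 = 12 ** 3  # 1,728 one-inch cubes per cubic foot

cells_1ft = 0.5e12                   # ATL/INB model at 1-ft³ resolution (from the text)
cells_1in = cells_1ft * FT3_PER_IN3  # same volume at 1-in.³ resolution
print(f"{cells_1in:.2e} cells")      # ~8.64e14, i.e. roughly 800 trillion
```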
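The contrast between a single-variable transform and a cloud transform can be sketched as follows. This is only an illustration: the log-linear fit, the lognormal scatter model, and all parameters are assumptions for the sketch, not Kasischke and Williams' actual procedure or ATL/INB data.

```python
import math
import random

def perm_single(phi):
    """Single-variable transform: one permeability (md) per porosity value.
    This collapses the core-data cloud onto a single curve (hypothetical fit)."""
    return 10 ** (20.0 * phi - 2.0)

def perm_cloud(phi, rng, spread=0.5):
    """Cloud transform (sketch): sample permeability from a distribution
    conditioned on porosity, preserving scatter seen in core data
    (lognormal scatter assumed here)."""
    log_k = math.log10(perm_single(phi)) + rng.gauss(0.0, spread)
    return 10 ** log_k

rng = random.Random(1)
phi = 0.20
print("single-variable:", perm_single(phi))  # always the same value for phi = 0.20
print("cloud samples:  ", [round(perm_cloud(phi, rng), 1) for _ in range(3)])
```

The design point is the one the text makes: the single-variable transform filters the observed variability, while the cloud transform lets the model's property distribution mimic the distribution in the real data.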