SUMMARY The marginal and bivariate distributions of the observations generated from a standard autoregressive moving average scheme are derived, assuming the noise to have a double‐exponential (Laplace) distribution. The distributions may differ substantially from their Gaussian counterparts. The AR(1) model with double‐exponential noise is applied to a series of weekly measurements of sulphate concentration and is shown to give a significantly better fit when compared with the Gaussian model.
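As a minimal numerical illustration of the scheme described above, the sketch below simulates an AR(1) process with double-exponential (Laplace) innovations and compares its tail weight with a Gaussian-driven AR(1) of matched innovation variance. The coefficient, scale, and series length are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n, phi, noise, burn_in=500):
    """X_t = phi * X_{t-1} + eps_t; `noise` draws the innovations."""
    eps = noise(n + burn_in)
    x = np.zeros(n + burn_in)
    for t in range(1, n + burn_in):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn_in:]                    # discard the transient

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0            # ~0 for a Gaussian series

n, phi = 100_000, 0.6                     # illustrative values only
laplace = simulate_ar1(n, phi, lambda m: rng.laplace(0.0, 1.0, m))
gauss   = simulate_ar1(n, phi, lambda m: rng.normal(0.0, np.sqrt(2.0), m))

# Laplace(0, 1) has variance 2, matching the Gaussian innovations above,
# yet the Laplace-driven series retains visibly heavier tails.
print(excess_kurtosis(laplace), excess_kurtosis(gauss))
```

This heavier-tailed behaviour is what makes the marginal and bivariate distributions differ substantially from their Gaussian counterparts.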
Summary We present a two-stage stochastic model that caters to the large-scale geological heterogeneities resulting from different rock types and the inherent spatial variability of rock properties. The suggested approach combines several elements from a variety of models, methods, and algorithms that have emerged during the last few years. This two-stage procedure can be used to generate several geologically sound realizations of a reservoir in an efficient manner. Stage 1 preserves the important geological architecture, while Stage 2 provides small-scale variability in the rock properties. At both stages, the stochastic models are conditional on the actual values observed in wells. Hence, every realization honors the observations. An example from a highly heterogeneous North Sea reservoir, deposited in an upper shore-face environment, illustrates application of the model.

Introduction As a result of high costs in offshore areas like the North Sea, only a minimum of exploration and appraisal wells can be justified before important field development decisions are made. The use of oversimplified geological models based on data from a limited number of widely spaced wells is probably one of the most important reasons for the failures in predicting field performance. Oversimplification and the use of unrealistic geological models result partly from the paucity of well data but also from the inappropriate use of available data. Experience shows, for example, that linear interpolation of petrophysical characteristics between wells some kilometers apart usually will not give a realistic image of the heterogeneity required to predict fluid flow. To give a realistic description of the point-to-point variation, we resort to stochastic models and simulation.

A reservoir is intrinsically deterministic. It exists, and its properties and features are potentially measurable at all scales. A reservoir is the product of many complex processes (sedimentation, erosion, burial, compaction, diagenesis, etc.) that operate over millions of years. Why, then, do we have to apply stochastic modeling? Haldorsen and Damsleth list the following reasons: the incomplete information about a reservoir's dimensions, internal architecture, and its rock-property variability at all scales; the complex spatial disposition of reservoir building blocks or facies; the difficult-to-capture rock-property variability and variability structure with spatial position and direction; the unknown relationships between the property value and the volume of rock used for averaging (the scale problem); the relative abundance of static reservoir data (point values along the well for kH, Sw, and seismic data) over dynamic data (time-dependent effects, how the rock architecture affects a recovery process, etc.); and convenience and speed: hand-drawing reservoir architecture and point-value realizations in three dimensions is a very difficult and time-consuming process.

The phenomena or variables that we normally describe with stochastic models are those that influence the amount, position, accessibility, and flow of fluids through reservoirs. Thus, stochastic modeling or simulation in this context usually refers to the generation of synthetic geological architecture and/or property fields in one, two, or three dimensions.
The different realizations are conditioned to observations and possess a number of other desirable reservoir/geological features that should provide an improved basis for recovery predictions. In addition, the uncertainty and risk associated with different development options can be quantified better. Dubrule gives a very good review of stochastic models for reservoir description, while Weber and van Geuns discuss the problem from a geologist's point of view, including some of the possible pitfalls. Several authors present valuable contributions to the theory and applications of stochastic modeling within the petroleum industry.

Two-Stage Model To mimic reality, heterogeneity must be accounted for because it is one of the most important factors governing fluid flow. A number of different approaches exist for the stochastic modeling of heterogeneities. The choice of technique depends on (1) the objective and scale of the study, (2) the available input data, (3) the theoretical skills of the people involved, and (4) the software available. The goal is to improve the evaluation of the production capacity of the field by introducing small- and/or large-scale heterogeneities into the reservoir description.

Discrete vs. Continuous Models. The distinction between two main classes of stochastic models (discrete and continuous) is convenient. A finer classification of the discrete models has been proposed. Discrete models were developed to describe geological features of a discrete nature (e.g., locations of sand in fluvial depositional environments or locations of shales suspended in sands).
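To make the two-stage idea and the discrete/continuous distinction concrete, here is a deliberately simplified one-dimensional sketch: a discrete Stage 1 (facies architecture drawn from a two-state Markov chain) followed by a continuous Stage 2 (facies-dependent porosity), with well cells overridden so that each realization honors the observations. The grid size, persistence probability, facies statistics, and well data are all invented for illustration; this is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 200
wells = {10: ("sand", 0.24), 120: ("shale", 0.08)}  # cell -> (facies, porosity)

# Stage 1: facies architecture from a simple two-state Markov chain;
# high persistence produces large-scale geological bodies.
p_stay = 0.95
facies = ["sand"]
for i in range(1, n_cells):
    if rng.random() < p_stay:
        facies.append(facies[-1])
    else:
        facies.append("shale" if facies[-1] == "sand" else "sand")
for cell, (f, _) in wells.items():
    facies[cell] = f                      # honor facies observed in wells

# Stage 2: small-scale porosity variability with facies-dependent statistics;
# well cells are fixed to their measured values, so the realization
# honors the observations exactly.
stats = {"sand": (0.22, 0.03), "shale": (0.06, 0.01)}  # (mean, std), assumed
poro = np.array([rng.normal(*stats[f]) for f in facies])
for cell, (_, phi_obs) in wells.items():
    poro[cell] = phi_obs
```

Rerunning with different seeds yields multiple equally plausible realizations, each matching the well data, which is the basis for the uncertainty quantification discussed above.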
The current use of geostatistics in the petroleum industry is reviewed and the main issues that need to be tackled before the potential of geostatistics is fully realized are highlighted. The paper reviews and discusses three main topics: (1) geostatistics and geology; (2) multidisciplinary data integration; and (3) uncertainty quantification with multiple realizations. Our main message is that geostatistics has come a long way and reached maturity. In the years ahead, geostatisticians should focus less on the development of new algorithms and more on the training of geoscientists and the development of new work flows for decision support with geostatistics as the core.
Summary While the uncertainty related to the mapping/quantification of hydrocarbons initially in place is well understood, there are open problems regarding the sources and propagation of errors/uncertainties in reservoir simulation. Based on measured data from only a small fraction of the total reservoir volume, the challenge is to construct a reservoir model that utilizes the available data and minimizes errors in the simulation results. Several studies have recently aimed at performing a total uncertainty analysis of reservoir simulation results. Underlying such work are usually a number of hypotheses/assumptions that are not always clearly expressed. In this paper we shall discuss the implications of some of the statistical methods that are commonly applied in uncertainty analysis and in the construction of a geological model. The Bayesian approach, in which additional data can reduce uncertainties, is emphasized. Previous papers from Norsk Hydro and others have demonstrated the large variation in parameters obtained from routine and special core analysis on samples originating from the same geological building block (lithofacies). This variation, which may sometimes be difficult to distinguish from uncertainty in the measurements, must be accounted for in models that describe small-scale variation.

Introduction Four categories of errors commonly occur in reservoir production estimates: (1) random measurement errors, (2) systematic errors (bias), including lack of representativeness, (3) upscaling errors, and (4) model errors. In this work we shall concentrate on the first three error types. As a basis for our discussion we shall assume the existence of a generic reservoir model consisting of rock and fluid parameters and a set of equations based on Darcy's law and conservation of mass. In order not to introduce too many complications, we shall restrict the object of study to an isothermal black-oil model consisting of two immiscible, incompressible phases (water and oil) and incompressible rock. Given an initial state of the reservoir in which all the rock parameters and all the saturations are known at all points, it is in principle possible, for a given recovery strategy, to infer the state of the reservoir and the oil and water production rates at any time. However, the information and computing power needed to operate this model are not available. For any piece of data that is introduced into a real-world reservoir model there is uncertainty. While measurement precision (random errors) can in most cases be quantified, systematic errors cannot be accounted for before they are known (and once they are known, they can usually be corrected). Reservoir description is the process of assigning parameter values to the reservoir model from the partial information that is available. Even if compliance with the measurements puts restrictions on the model, there is still a lot of ambiguity left. In reservoir uncertainty analysis one tries to quantify this ambiguity in order to assess the uncertainty in the predictions from the reservoir model (Fig. 1).
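As a minimal numerical illustration of the Bayesian point above (that additional data can reduce uncertainty), the following sketch performs a conjugate normal update of the mean porosity of a single lithofacies from a handful of core-plug measurements. The prior, the noise level, and the data are invented for the example.

```python
import numpy as np

# Prior belief about the mean porosity of a lithofacies (invented numbers).
mu0, tau0 = 0.20, 0.05            # prior mean and prior standard deviation

# Noisy core-plug measurements; sigma lumps together true rock variability
# and measurement error, which the text notes can be hard to separate.
data = np.array([0.23, 0.19, 0.22, 0.21])
sigma = 0.03

# Conjugate normal-normal update: the posterior precision is the sum of
# the prior precision and one data precision per measurement.
n = data.size
post_prec = 1.0 / tau0**2 + n / sigma**2
mu_post = (mu0 / tau0**2 + data.sum() / sigma**2) / post_prec
tau_post = np.sqrt(1.0 / post_prec)

print(f"posterior mean {mu_post:.3f}, posterior std {tau_post:.3f} "
      f"(prior std was {tau0})")
```

Each additional measurement adds a term to the posterior precision, so the posterior standard deviation shrinks below the prior one, mirroring the uncertainty reduction the paper emphasizes.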