Abstract. This paper describes the first major attempt to compare seven different inverse approaches for identifying aquifer transmissivity. The ultimate objective was to determine which of several geostatistical inverse techniques is better suited for making probabilistic forecasts of the potential transport of solutes in an aquifer where spatial variability and uncertainty in hydrogeologic properties are significant. Seven geostatistical methods (fast Fourier transform (FF), fractal simulation (FS), linearized cokriging (LC), linearized semianalytical (LS), maximum likelihood (ML), pilot point (PP), and sequential self-calibration (SS)) were compared on four synthetic data sets. Each data set had specific features meeting (or not) classical assumptions about stationarity, amenability to a geostatistical description, etc. The comparison of the outcome of the methods is based on the prediction of travel times and travel paths taken by conservative solutes migrating in the aquifer for a distance of 5 km. Four of the methods, LS, ML, PP, and SS, were identified as being approximately equivalent for the specific problems considered. The magnitude of the variance of the transmissivity fields, which went as high as 10 times the generally accepted range for linearized approaches, was not a problem for the linearized methods when applied to stationary fields; that is, their inverse solutions and travel time predictions were as accurate as those of the nonlinear methods. Nonstationarity of the "true" transmissivity field, or the presence of "anomalies" such as high-permeability fracture zones, was, however, more of a problem for the linearized methods. The proper selection of the semivariogram of the log10(T) field (or the ability of the method to optimize this variogram iteratively) was found to have a significant impact on the accuracy and precision of the travel time predictions. Use of additional transient information from pumping tests did not result in major changes in the outcome. While the methods differ in their underlying theory, and the codes developed to implement the theories were limited to varying degrees, the most important factor for achieving a successful solution was the time and experience devoted by the user of the method.
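The abstract stresses how strongly the choice of semivariogram for the log10(T) field affects the travel time predictions. As a minimal, hypothetical illustration of that step (not the procedure used by any of the seven codes compared in the paper), the Python sketch below computes an isotropic experimental semivariogram from scattered log10(T) values and fits an exponential model to it; the well locations, field parameters, and function names are all invented for this example.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist, squareform

# Hypothetical synthetic data set: 200 wells in a 5 km x 5 km area. The "true" log10(T)
# field is drawn with sill 2.25 and practical range 800 m via a Cholesky factorisation
# of an exponential covariance matrix (illustration only, not the paper's data sets).
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 5000.0, size=(200, 2))              # well coordinates in metres
dist = squareform(pdist(xy))                              # pairwise distance matrix
cov = 2.25 * np.exp(-3.0 * dist / 800.0)                  # exponential covariance model
log_t = -4.0 + np.linalg.cholesky(cov + 1e-9 * np.eye(200)) @ rng.standard_normal(200)

def experimental_semivariogram(xy, z, lag_edges):
    """Isotropic experimental semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] per lag bin."""
    h = pdist(xy)                                         # pairwise separation distances
    dz2 = pdist(z[:, None], metric="sqeuclidean")         # squared value differences, same pair order
    centers, gamma = [], []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (h >= lo) & (h < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(0.5 * dz2[mask].mean())
    return np.array(centers), np.array(gamma)

def exponential_model(h, sill, a):
    """Exponential semivariogram model with practical range a: sill * (1 - exp(-3h/a))."""
    return sill * (1.0 - np.exp(-3.0 * h / a))

lag_edges = np.linspace(0.0, 2500.0, 26)
centers, gamma = experimental_semivariogram(xy, log_t, lag_edges)
(sill, a), _ = curve_fit(exponential_model, centers, gamma, p0=[1.0, 500.0])
print(f"fitted sill ~ {sill:.2f} (target 2.25), practical range ~ {a:.0f} m (target 800 m)")
```

A real application would fit the model to measured transmissivities rather than to a synthetic field, and would also have to contend with anisotropy and nonstationarity, which the abstract identifies as the harder cases for the linearized methods.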
No abstract
Quantitative methods that take the sedimentological characteristics into account will answer the needs of reservoir engineers. We propose here a geostatistical method for the conditional modelling of the facies of a sedimentary fluvio-deltaic series. This model was developed jointly by I.F.P. and the Paris School of Mines, with the aim of modelling reservoir heterogeneities. From the sedimentological study contained in the paper by C. Ravenne et al., we present several simulations conditioned by "drill-cores" taken from the outcrop. The block permeabilities are then calculated from the values given to the facies.

Introduction

Reservoir engineers have long been asking what type of models could be entered into reservoir simulators. The geological models normally used are essentially qualitative, so it is difficult to digitize them, except by correlating the drillhole facies, which is not always self-evident. This often leads to models with too many constraints for simulating reservoirs, whose dynamic behaviour worsens considerably going from very continuous sandstone layers to disseminated lenses (Fig. 1). This raises the question of how to characterize the geometry of the sandstone bodies given the drillhole data, the geologist's interpretation and other measurements (e.g. seismic recordings), and how to model the reservoir levels to suit this geometry. In addition, the models must match the lithology along the drillholes. In this article, we present a method for conditionally modelling the lithology that is designed for sedimentary processes. The approach was tested on the geological section of a cliff face in Yorkshire (England), which shows a fluvio-deltaic environment similar to some of the levels in the Brent formation in the North Sea. A detailed description of the geology of the cliff face and of the approach used in this project has been presented by Ravenne et al. Here we consider only the problem of modelling random sets, using a probabilistic method to represent the spatial distribution of the facies (sandstone, shaly sandstone, shale) in a heterogeneous reservoir. Before presenting the method, we review the main procedures for modelling random sets that are used in the petroleum industry.

REVIEW OF THE EXISTING METHODS

Boolean Sets

A simple way is to consider a heterogeneous medium as consisting of sandstone lenses in a shale matrix (or vice versa). Boolean sets (Matheron, Serra, Jeulin) are a mathematical way of modelling this type of deposit that has been used for many years in other fields (Fig. 2). The method consists of placing lenses of a predetermined shape (e.g. ellipses or rectangles) at random points in the domain under study (i.e. the points are statistically uniformly distributed in space); the lenses are not correlated with one another. The advantage of this approach is that it is easy to use in 2D or 3D. It depends on only a few parameters: the number of seed points per unit area or volume (called the density), the shape of the lenses (fixed or variable), and their size and orientation. The model is very flexible: the parameters can be modified locally in order to reproduce the real phenomenon more accurately. Clearly, the more complicated the model, the more parameters there are to fit, but this can be overcome by fitting them by trial and error.
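As a rough, unconditional illustration of the Boolean approach just described (not the conditional I.F.P./Paris School of Mines model presented in the article), the Python sketch below stamps rectangular sandstone lenses at uniformly distributed seed points in a 2D shale grid; the grid size, lens dimensions, and density are invented for this example.

```python
import numpy as np

def boolean_simulation(nx, nz, density, lens_len, lens_thick, seed=0):
    """Unconditional Boolean simulation: rectangular sandstone lenses (1) in a shale matrix (0).

    Seed points are uniformly distributed over the section (Poisson number of lenses with
    mean density * nx * nz); each lens is a fixed-size rectangle centred on its seed point.
    """
    rng = np.random.default_rng(seed)
    facies = np.zeros((nz, nx), dtype=np.int8)        # start with shale (0) everywhere
    n_lenses = rng.poisson(density * nx * nz)         # density = expected seed points per cell
    cx = rng.uniform(0.0, nx, size=n_lenses)          # lens centres, uniform in space
    cz = rng.uniform(0.0, nz, size=n_lenses)
    for x0, z0 in zip(cx, cz):
        j0 = max(int(x0 - lens_len / 2), 0)
        j1 = min(int(x0 + lens_len / 2), nx)
        i0 = max(int(z0 - lens_thick / 2), 0)
        i1 = min(int(z0 + lens_thick / 2), nz)
        facies[i0:i1, j0:j1] = 1                      # stamp the lens; overlapping lenses simply merge
    return facies

# Invented parameters: a 500 x 100 cell cross-section with lenses 40 cells long and 4 cells thick.
section = boolean_simulation(nx=500, nz=100, density=0.0005, lens_len=40, lens_thick=4)
print(f"sandstone proportion ~ {section.mean():.2f}")
```

Conditioning such a simulation so that it honours the facies observed along the drillholes is the harder problem, and it is the one the method proposed in the article is designed to address.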
No abstract