Exploitation of shale plays is viewed by many as a major resource for the future economies of many countries. To date, most exploitation of this resource has centered on North America, where the high count of onshore wells previously drilled for conventional resources guides the pursuit of unconventional plays. The situation is very different across onshore Europe, which has a low existing well count for conventional resources and notable socio-political and infrastructure challenges, such as high population densities. To be successful, the delineation of European shale plays must make full use of existing data and drill a reduced number of wells during exploration. During this phase, the ability to manage uncertainty and make informed decisions across the potential shale plays is vital. An optimal approach is proposed, whereby all available surface and subsurface data sources are integrated and exploration screening is based on advanced petroleum systems modeling. To illustrate the approach, data from the onshore Netherlands have been selected. The West Netherlands Basin and Roer Valley Graben contain organic-rich Jurassic sequences within the Altena Group, including the well-known Posidonia Shale Formation, which is currently being targeted as a potential unconventional resource. A fully integrated 3D geological model – including an advanced 3D petroleum systems model – is presented, incorporating critical spatial information such as geographical terrains and surface constraints. Results from this approach clearly delineate areas of higher prospectivity and, importantly, their associated uncertainty, allowing E&P companies to select the areas with the best chance of success.
A well log measurement can be modelled as the sum of three components: the formation signal, random noise, and systematic error. The sources of systematic error include tool malfunctions, shop and field miscalibrations, operator error, and inherent hardware design limitations. Log calibration, more commonly referred to as log normalization, is the process of applying corrective shifts to well logs to minimize the systematic error. In this paper we develop a machine learning approach to the multi-well log normalization problem, which we believe is particularly applicable in unconventional field studies involving hundreds of wells of varying data quality and vintage. We start by applying machine learning to the multi-well normalization problem, where the reference unit and reference wells are selected by the geoscientist. The reference unit is typically a laterally extensive stratigraphic interval with a consistent log response over the area of interest – in our case, a tight limestone with small amounts of dolomite and/or silt. The reference wells are those that do not require any normalization. A predictive machine learning model is trained using log data from the reference unit in the reference wells, and a regression-based optimization algorithm is used to solve for constant shifts, which are applied as normalization corrections to the density and neutron logs in the remaining wells. The process of selecting reference wells and picking the boundaries of the calibration unit can be subjective; it is influenced by the geoscientist's experience with the area of interest, the pressures of project timelines, and the availability of sufficient resources. The impact of these human-introduced biases can be severe in large projects where a team of geoscientists is engaged to process hundreds of wells: inconsistent normalization practices can lead to large errors in computed reservoir properties.
As a solution to minimize such inconsistencies, we propose to extend the use of machine learning to create a seamless workflow that is automated and requires minimal user involvement in the execution phase. Towards this objective, we incorporate an additional machine learning component, which eliminates the requirement for a priori knowledge of the reference unit boundaries. The resulting workflow produces consistent, high-quality results that compare very well with those produced by manual log normalization workflows run by experts. Comparison of the machine-learning-based results against expert answers shows that the machine learning approach offers an efficient alternative to manual log normalization, with significant gains in projects involving large numbers of wells, as is the case in unconventional plays.
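The core mechanics of the shift-solving step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes synthetic neutron/density data, stands in a simple linear fit for the trained predictive model, and recovers a constant density shift as the least-squares solution (the mean residual between the model's prediction and the observed log).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-in for the reference unit (e.g., a tight limestone) ---
# In the reference wells the neutron/density pair follows a consistent trend;
# a simple linear fit stands in for the trained predictive ML model.
neutron_ref = rng.uniform(0.00, 0.10, 200)                          # neutron porosity (v/v)
density_ref = 2.71 - 1.6 * neutron_ref + rng.normal(0, 0.005, 200)  # bulk density (g/cc)

# "Train" the predictive model on the reference unit in the reference wells.
slope, intercept = np.polyfit(neutron_ref, density_ref, 1)

# --- A well needing normalization: same formation response, but with a
# constant systematic error (e.g., a miscalibrated density tool). ---
true_shift = 0.03  # g/cc systematic error, to be recovered
neutron_new = rng.uniform(0.00, 0.10, 150)
density_new = (2.71 - 1.6 * neutron_new
               + rng.normal(0, 0.005, 150) + true_shift)

# Least-squares solution for a constant shift: the mean residual between
# the predicted and observed density log.
predicted = slope * neutron_new + intercept
shift = np.mean(density_new - predicted)   # estimate of the systematic bias
density_corrected = density_new - shift    # normalized density log

print(round(float(shift), 3))  # close to the 0.03 g/cc systematic error
```

In the paper's workflow the predictive model is richer than a single linear fit and the reference-unit boundaries are themselves detected by a second ML component, but the normalization correction is likewise a constant shift solved by regression-based optimization.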
Production forecasting and hydrocarbon reserve estimation play a major role in production planning and field evaluation. Traditional forecasting methods use only historical production data and do not account for completion and geolocation attributes, which limits their predictive ability, especially for wells with a short production history. In this paper, we present a novel data-driven approach that accounts for the completion and geolocation parameters of a well, along with its historical production data, to forecast production. We used supervised learning to develop an ensemble of machine learning (ML) models to forecast the production behavior of oil and gas wells. The developed models use historical production data, geolocation parameters, and completion parameters as features. The dataset used to create the models comprises publicly available data from 80,000 unconventional wells in North America. The developed models were rigorously tested against 5% of the original data set, systematically studied, and compared against traditional forecasting techniques; the results are presented here. The ensemble of models was tested by forecasting the production of 3,700 wells, and the results were compared against real production data. We show that the models clearly capture the natural decline trend of the produced hydrocarbon. In cases where the natural decline of a well has been temporarily modified, possibly due to operations, the production during other periods of the time series matches the prediction. This indicates that, unlike with traditional methods, such changes do not adversely impact the forecasting ability of our method. We also conducted a systematic investigation comparing the forecast from the developed model against the forecast from a traditional method (Arps, 1945).
During the comparison, it was observed that for wells with short production histories (2 to 12 months of available data), the error in the production behavior predicted by traditional methods was higher than that of the developed method. As the quantity of historical production data increases, the forecasting ability of traditional methods improves. By comparison, the decline from the developed method matches the real production data for wells with both short and long production histories, and clearly outperforms the traditional methods in blind tests. In this work, we present a novel ML-based approach for forecasting production. This approach overcomes the limitation of traditional time-series forecasting techniques, which use only past data, by also incorporating static parameters (completion and geolocation parameters) in its architecture. The developed method leverages statistical averaging by employing an ensemble of random forest models, making it better suited than other ML-based forecasting methods (ARIMA and LSTM) for this time-series data.
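The ensemble idea above – static completion/geolocation features combined with early production history, with statistical averaging over many resampled models – can be sketched as follows. This is a hedged, numpy-only illustration on synthetic data: the feature names are hypothetical stand-ins, and a bootstrap ensemble of linear models (bagging) substitutes for the paper's random-forest ensemble to show the averaging mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Synthetic wells: static features plus early production history ---
# Columns are hypothetical stand-ins: lateral length, proppant mass,
# latitude, and cumulative production over the first months (all scaled 0-1).
n_wells = 500
X = rng.uniform(0, 1, (n_wells, 4))
# Target: a later-month production rate driven by completion/geolocation
# and early history, plus noise (a crude stand-in for decline behavior).
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.8 * X[:, 3]) / 1.5 \
    + rng.normal(0, 0.02, n_wells)

train, test = slice(0, 450), slice(450, None)   # hold out 10% as a blind test

# --- Bootstrap ensemble (bagging): each member is fit on a resampled
# training set and the members' predictions are averaged. ---
def fit_member(Xb, yb):
    A = np.column_stack([Xb, np.ones(len(Xb))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, yb, rcond=None)
    return coef

members = []
for _ in range(25):
    idx = rng.integers(0, 450, 450)              # bootstrap resample
    members.append(fit_member(X[train][idx], y[train][idx]))

A_test = np.column_stack([X[test], np.ones(X[test].shape[0])])
preds = np.mean([A_test @ c for c in members], axis=0)

rmse = np.sqrt(np.mean((preds - y[test]) ** 2))
print(round(float(rmse), 3))  # near the 0.02 noise floor
```

The statistical-averaging benefit the abstract cites comes from exactly this structure: each resampled member overfits its own bootstrap sample, but the ensemble mean has lower variance than any single member.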
Once considered a dangerous nuisance in the mining industry, coal seam gas (CSG), or coalbed methane, is now seen as an abundant clean energy supply that will help replace other diminishing hydrocarbon reserves. The nature of the coal-bearing sequences, however, makes them difficult to drill and produce profitably. Major challenges include multiple thin-bedded zones, strong variations in coal quality, and large volumes of formation water that must be removed before the gas can flow to the surface. This paper describes a new technology- and knowledge-driven approach to address these challenges in the exploration and development stages of a CSG field. During exploration, combining multiple data types in a single model dramatically reduces the uncertainty in coal seam distribution and gas-in-place estimation. In field development planning, the unified approach integrates static and dynamic data to enable a better understanding of the field's producibility. The exploitation of CSG requires analysis of many scenarios and uncertainties, and it requires hundreds of wells to be drilled in a short period of time. Consideration of the high levels of uncertainty and the integration of large volumes of newly acquired data can be achieved efficiently only in a unified software environment backed by a strong knowledge management system. One of the key benefits of this unified approach is the ability to update models and test multiple scenarios at any stage of the field life cycle, and to track the processes with a strong audit trail. Data used for demonstration in this paper are from the Surat basin in Central Queensland, Australia.