Summary

Production-data analysis is a practice fraught with inconsistency. In the application of any single model, experienced evaluators often arrive at as many answers as there are evaluators analyzing the data. The cause of this inconsistency is bias on the part of the evaluators. Although the colloquial use of "bias" typically implies systematic error, in this paper we define bias as an expression of belief by the evaluator. When bias goes unrecognized, no means exists to gauge its accuracy. A method that requires explicit expression of one's bias in time/rate decline behavior provides an objective means to evaluate it.

In this work, we present a machine-learning method to forecast production in unconventional, liquids-rich shale and shale-gas wells. Methods were developed for probabilistic decline-curve analysis with Markov-chain Monte Carlo (MCMC) simulation as a means to quantify reserves uncertainty, to incorporate prior information (i.e., bias), and to do so quickly. We extend existing approaches with (a) a modified likelihood-distribution function to improve "learning" of production data, (b) integration of the transient hyperbolic model (THM) to explicitly define the various flow regimes present in unconventional wells, (c) a method for constructing discretized "percentile neighborhood" forecasts, and (d) construction of type wells from an analyzed well population.

The accuracy and calibration of the method are demonstrated with an analysis of 136 wells in the Elm Coulee Field of the Bakken. Quantification of the change in time/rate behavior caused by completion design, and the inference of physical behavior and properties, is demonstrated with a tight-oil play in the Cleveland sand formation of the Anadarko Basin, as well as a shale play in the Wolfcamp formation of the Permian Basin.

We show that this implementation of supervised machine learning, in combination with well-calibrated bias, improves the estimation of uncertainty in the posterior distribution of forecasts. In addition, hindcasts performed at various time intervals result in accurate estimation of mean five-year cumulative production. We observe that the "percentile neighborhood" forecasts fit the production data comparably to forecasts created by a human evaluator, and that the computed type well is representative of the decline behavior of the well population upon which it is based. We conclude that, given the speed and accuracy of the process, machine learning is a reliable technology as defined by the US Securities and Exchange Commission (SEC), and can significantly improve the process of production forecasting by human evaluators for most unconventional wells with consistent trends of production history.
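The following minimal sketch illustrates the flavor of the probabilistic decline-curve workflow described above: a random-walk Metropolis sampler (one simple MCMC variant, standing in for whatever sampler the authors used) fitting a standard hyperbolic decline model to synthetic monthly rates. The priors, noise level, and all parameter values are assumptions for demonstration; the paper's modified likelihood and THM integration are not reproduced here.

```python
import numpy as np

def hyperbolic_rate(t, qi, Di, b):
    """Hyperbolic decline: q(t) = qi / (1 + b*Di*t)^(1/b)."""
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def log_posterior(theta, t, q_obs, sigma=0.1):
    """Log-posterior with uniform priors (assumed ranges) and a
    Gaussian likelihood on log-rate, a common choice for rate data."""
    qi, Di, b = theta
    if not (0 < qi < 10_000 and 0 < Di < 5 and 0.1 < b < 2.5):
        return -np.inf  # outside prior support
    resid = np.log(q_obs) - np.log(hyperbolic_rate(t, qi, Di, b))
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(t, q_obs, theta0, steps=20_000, scale=(10.0, 0.01, 0.02)):
    """Random-walk Metropolis sampler over (qi, Di, b)."""
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, float)
    logp = log_posterior(theta, t, q_obs)
    chain = np.empty((steps, 3))
    for i in range(steps):
        prop = theta + rng.normal(0.0, scale)
        logp_prop = log_posterior(prop, t, q_obs)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain[i] = theta
    return chain

# Synthetic example: 36 months of noisy hyperbolic decline.
t = np.arange(1, 37, dtype=float)
rng = np.random.default_rng(42)
q = hyperbolic_rate(t, qi=500.0, Di=0.15, b=1.2) * rng.lognormal(0.0, 0.1, t.size)
chain = metropolis(t, q, theta0=(400.0, 0.1, 1.0))
posterior = chain[5_000:]  # discard burn-in
print("P10/P50/P90 of b:", np.percentile(posterior[:, 2], [10, 50, 90]))
```

Percentile forecasts (e.g., P10/P50/P90) then follow directly from evaluating the decline model over the retained posterior samples.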
In this work we present the "Transient Hyperbolic" relation for the analysis and interpretation of time-rate performance data from wells in shale-gas/liquids-rich shale plays. This model assumes a transient b(t) function that has constant early-time and constant late-time values, joined by an exponentially decaying transition function. This b(t) function is derived from the Gompertz logistic function. Our goal in developing this formulation is to represent the early-time (or clean-up) portion of the production profile (often a hyperbolic function), the transition to the terminal hyperbolic behavior, and finally, the terminal hyperbolic behavior itself, where the terminal hyperbolic is usually representative of the "non-interfering" vertical fractures in a multi-fractured horizontal well (MFHW). We could further modify this relation to have a terminal exponential decline (thought to represent the performance of the stimulated reservoir volume, or SRV), but that is not a primary purpose of this work; our primary purpose is to demonstrate the "Transient Hyperbolic" nature of the flow behavior of a multi-fractured horizontal well in a shale-gas/liquids-rich shale play. The technical contributions of this work are:

• Development of the "Transient Hyperbolic" time-rate relation, which includes an early-time AND a late-time hyperbolic behavior, as well as a logistic transition function (an illustrative sketch of one such b(t) form follows this list).
• Application of the "Transient Hyperbolic" time-rate relation to modeling completion heterogeneity in an MFHW by a superposition of divisions that have differing durations of linear flow.
• Application of the "Transient Hyperbolic" time-rate relation to several field cases, specifically: tight-gas, shale-gas, and "liquids-rich" shale cases.
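As a concrete illustration of such a transient b(t) function, the minimal sketch below joins constant early- and late-time b values with a Gompertz-type (double-exponential) transition. The functional form and the parameter names (b_early, b_late, t_mid, gamma) are assumptions chosen to match the qualitative description above, not the exact relation developed in the paper.

```python
import numpy as np

def b_transient(t, b_early=2.0, b_late=0.5, t_mid=180.0, gamma=0.02):
    """Illustrative transient b(t): constant early- and late-time values
    joined by a Gompertz-type (double-exponential) transition.
    Form and parameter values are assumptions for demonstration only."""
    # The transition runs smoothly from ~0 (early time) to ~1 (late time).
    transition = np.exp(-np.exp(-gamma * (t - t_mid)))
    return b_early + (b_late - b_early) * transition

t = np.linspace(0.0, 1000.0, 5)  # time, e.g., days
print(b_transient(t))  # decays smoothly from b_early toward b_late
```

Here we only evaluate the transition itself; a complete time-rate model would couple this b(t) with an initial rate and a decline-parameter function to produce rate forecasts.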
Engineers and leaders who must decide on development strategies for unconventional resource projects face a challenging design problem. While we must make decisions on well and completion design, including well spacing in three dimensions, the complexity of the physical system and the interactions between these parameters can become overwhelming. The technical optimization problem can be difficult; however, asking the right questions can make the business decision clearer than it first appears.

The typical approach to design-optimization problems is to build models, with a tendency toward including an ever-increasing number of parameters to describe the system in exhaustive detail. However, our uncertainty in the model parameters often makes it impossible to identify the true optimum. In this work, we focus instead on reducing the number of model parameters and capturing the impact of the critical uncertainties on our business decisions. This allows us to answer the right questions and thereby define and choose the best well-spacing strategy.

For well-spacing optimization, a critical uncertainty is the relationship between the chosen well spacing and the potential well-performance degradation, in terms of estimated ultimate recovery (EUR) and initial production (IP). Rather than attempting to describe fracture geometry and well interference from a mechanistic standpoint, we introduce a lumped parameter, the shared-reservoir (SR) factor, to account for this complex relationship. The parameter distribution may be calibrated to (a) well results in a play, (b) well results in carefully selected analogue plays, or (c) simulated well results from probabilistic analyses.

An example of a Monte Carlo simulation using the uncertainty of the SR factor, as well as the mean EUR and IP, highlights the utility of the method. We also illustrate how the spacing decision impacts key risk and financial metrics, including the expected monetary value of the project, the probability of regretting the decision, and the probability of commercial success of the project.

The shared-reservoir factor is proposed to capture the complex relationships between the well-spacing decision and the EUR and IP that result from it. Using the shared-reservoir factor, we can develop simple stochastic models to clarify an otherwise frustratingly complex optimization problem.
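A minimal sketch of the kind of Monte Carlo screen described above, assuming a hypothetical triangular distribution for the SR factor in each spacing case and toy per-well economics; the degradation model (EUR scaled by 1 - SR), the distributions, and all numbers are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000

# Hypothetical SR-factor modes per spacing case (tighter spacing -> more
# shared reservoir). All values are assumptions for illustration.
spacing_cases = {"4 wells/section": 0.05,
                 "6 wells/section": 0.15,
                 "8 wells/section": 0.30}

for label, sr_mode in spacing_cases.items():
    # Undegraded per-well EUR in MBO (assumed lognormal distribution).
    eur = rng.lognormal(mean=np.log(450.0), sigma=0.35, size=n_trials)
    # SR factor drawn from an assumed triangular distribution.
    sr = rng.triangular(0.0, sr_mode, min(2 * sr_mode, 0.9), n_trials)
    eur_degraded = eur * (1.0 - sr)  # assumed degradation model
    # Toy per-well economics in $M: assumed $25/bbl netback, $9MM well cost.
    npv = 25.0 * eur_degraded - 9_000.0
    print(f"{label}: mean EUR {eur_degraded.mean():.0f} MBO, "
          f"P(loss) {(npv < 0).mean():.1%}")
```

A full spacing decision would compare value per section (more wells at lower per-well recovery) rather than per-well metrics alone; the per-well view is kept here only to keep the sketch short.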
Production-data analysis methods rely on the quality of data, particularly in the diagnostic aspects of these analyses. In particular, data artifacts that obscure the view of the reservoir signal limit the reliability of interpretation and subsequent analysis by even the most experienced evaluator. It is fair to state that the algorithms currently used in the industry often amplify noise and are inherently unstable in the presence of outliers. This work proposes to significantly improve data processing using modern algorithms for data filtering and statistical (Bayesian) methods for estimating derivative functions in the presence of random and biased noise. Put simply, this work will improve the ability to interpret time-rate-pressure data and increase confidence in the diagnosis of well-performance behavior.

The cost of computational power available in common desktop computers, measured in dollars per floating-point operation, has decreased more in the last ten years than in any preceding ten-year period. The level of computation available has inspired immense interest in the development of free, open-source software projects that make use of the newly available resources and enable access to legacy code for numerical processing with the features of modern programming languages. Having access to low-level libraries abstracted to high-level languages provides the capability to apply complex statistics with (relatively) simple computer modules. In turn, this capability gives rise to a new concept: the ability to create modules/programs that reason about our intentions, as opposed to our implementations.

In this work, we illustrate two applications. First, outlier filtering that makes no assumptions about the distribution or density of the data (i.e., about what is or is not an outlier), only that a regression of a known model to the data must be statistically robust (not influenced by outliers). We evaluate the method by applying it, using publicly available data, to every horizontal well in the Midland and Delaware basins put on production since January 2013. Second, the evaluation of data derivatives by Bayesian inference. The use of Bayesian methods provides two advantages: (1) a distribution of non-unique results enables us to visualize uncertainty due to data quality, and (2) automatic hyperparameter optimization, which in this case is regularization for smoothness of the derivative. We evaluate this methodology by comparison to existing methods for cases in the petroleum literature, and for field cases using permanent-downhole-gauge data.

We observe that the data-filtering method works well and applies generally to any data set for which we have an a priori model assumption (e.g., a function, y(x)). In a practical sense, we can incorporate the regression of the assumed model into the filtering algorithm, or use a simpler model for filtering and pass the output along for further processing. We observe that the derivative-computation method yields smoother, less noisy derivatives than existing sampling and smoothing methods.

We propose new methodologies for data smoothing and derivative analysis and illustrate their application to cases from the petroleum literature. These new tools are relevant for any engineer performing production (and pressure) data analysis, whether empirical production-decline analyses or physics-based rate-transient/pressure-transient analyses.
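A minimal sketch of the model-based outlier-filtering idea described above (not the authors' algorithm): regress an assumed a priori model y(x) with a robust loss so outliers cannot dominate the fit, then flag points whose residuals are large relative to a robust scale estimate. The hyperbolic model, the soft-L1 loss, and the 3-sigma cutoff are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def hyperbolic_rate(t, qi, Di, b):
    """Assumed a priori model y(x): hyperbolic decline."""
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def robust_outlier_mask(t, q, theta0=(500.0, 0.1, 1.0), cutoff=3.0):
    """Fit the model with a robust (soft-L1) loss, then flag points whose
    log-residuals exceed `cutoff` robust standard deviations (MAD-based)."""
    resid_fn = lambda th: np.log(q) - np.log(hyperbolic_rate(t, *th))
    fit = least_squares(resid_fn, theta0, loss="soft_l1", f_scale=0.1,
                        bounds=([1e-3, 1e-4, 0.1], [1e5, 5.0, 2.5]))
    r = resid_fn(fit.x)
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust sigma
    return np.abs(r - np.median(r)) > cutoff * scale      # True = outlier

# Synthetic data with a few gross outliers (e.g., shut-in months).
t = np.arange(1, 61, dtype=float)
rng = np.random.default_rng(7)
q = hyperbolic_rate(t, 500.0, 0.15, 1.2) * rng.lognormal(0.0, 0.08, t.size)
q[[10, 25, 40]] *= 0.2  # downtime artifacts
mask = robust_outlier_mask(t, q)
print("flagged indices:", np.flatnonzero(mask))
```

The same pattern generalizes to any assumed model y(x): only the residual function changes, which is what lets the filter avoid assumptions about the distribution or density of the data itself.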
Choosing the best projects to fund is easy. Our challenge is weighing the complexities of the projects that are at the threshold for funding in a capital-constrained environment. Even though all the projects under consideration might be economically viable on a stand-alone basis, we seek to determine which suite of capital funding options best meets our long-term goals.

Apache Corporation maintains a broad inventory of operating assets and investment opportunities, composed of projects with considerable variability in uncertainty of potential performance and risk of financial loss. The opportunity suite consists of projects with highly varying capital-investment patterns and production profiles. Further, the project portfolio faces varying exposure to commodity markets, petroleum fiscal regimes, and aboveground operational and sovereign risks. Apache has found that implementing and sustaining a portfolio process requires technical solutions and application of best practices for three critical elements: Production Forecasting, Project Modeling & Economic Evaluation, and Portfolio Management & Decision Making.

A robust portfolio process for investment decision-making requires organizational alignment around a shared vision for value recognition and a rigorous, disciplined approach to capital allocation. Value recognition is critically dependent on establishing internal practices and standards for consistent application of methods and tools in characterizing cash-flow potential from the suite of investment opportunities. Apache's implementation of a portfolio process was undertaken as a sequence of initiatives with clear deliverables focused on building critical capabilities and infrastructure within key groups, while driving organizational alignment around the process. Major steps in the change-management effort included:

• creation of a portfolio modeling group
• shifting focus from well characterizations to project characterizations
• software and systems investments
• organizational alignment
• executive adoption

Over the past decade, Apache has undertaken a major shift in strategic focus toward organic growth by placing significant investments in North American unconventional resource plays. The worldwide portfolio of opportunities became increasingly complex in terms of demand for capital, pattern of cash flows, and uncertainty in outcomes. Comparing and contrasting the performance potential and limitations of the opportunity set, in the context of corporate goals and constraints, became increasingly difficult with standard ranking methodologies. Apache's portfolio reached the enviable state of possessing more projects in inventory than capital available for funding. With each business unit (BU) focused on extending and optimizing its own opportunity set, the overall integrated value for the corporation was not fully recognized. A realignment of processes and a shift in cultural perspective emphasizing an integrated whole was needed.