A statistically rigorous assessment of the effect of fracturing treatment chemical additives on well productivity was performed. The dataset comprised over 4,500 slickwater-treated wells in the lower 48 US states, all treated by a single service company within a 5-year period. The analysis focused on two distinct additive types in slickwater treatments: linear guar gels and surfactant-based flowback aids. A method and workflow were developed to quantify the effects of completion parameters on well productivity. Because the production data were observed to be log-normally distributed, the statistical t-test was applied to assess the significance of differences in aggregated production metrics between datasets. The proposed workflow addresses many common issues a reservoir engineer faces during data sourcing, preprocessing, evaluation, and interpretation, and further highlights the importance of proper statistical approaches. The results emphasize the benefits of proper job design even in relatively simple slickwater treatments.
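The comparison described above can be sketched in a few lines: because log-normal data become normal after a log transform, the t-test is applied to log-transformed production metrics. This is a minimal illustration on simulated data; the well counts, distribution parameters, and the surfactant-additive grouping are assumptions for demonstration only, not values from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical example: simulated cumulative production for wells treated
# with and without a surfactant-based flowback aid (log-normal, as observed
# in the study; parameters here are illustrative).
with_additive = rng.lognormal(mean=10.1, sigma=0.6, size=500)
without_additive = rng.lognormal(mean=10.0, sigma=0.6, size=500)

# Apply the t-test on log-transformed data (Welch's variant, which does
# not assume equal variances between the two groups).
t_stat, p_value = stats.ttest_ind(
    np.log(with_additive), np.log(without_additive), equal_var=False
)

# On the log scale, the mean difference corresponds to a ratio of
# geometric means, a natural effect-size measure for log-normal data.
uplift = np.exp(np.log(with_additive).mean() - np.log(without_additive).mean())
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, geometric-mean uplift = {uplift:.2f}x")
```

A small p-value would indicate that the difference in (geometric) mean production between the two groups is unlikely to be due to chance alone.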
Cross-domain data analysis is arguably the most important part of oilfield data analytics. While it enables holistic process optimization, it is also challenging to execute. Data are often scattered across different databases, making them complex to join, and may require multidomain expertise to analyze properly. Here, data collected by three well construction business lines of the same service company were processed and analyzed to establish a link between drilling fluid properties and drilling performance. The data engineering workflow begins by combining information from a single service company about drilling operations, drill bits, and drilling fluids into a single dataset. Metadata, including locations, operators, and wells, are then mapped, and overlapping attributes are unified and reconciled. The data are further processed to extract relevant drilling performance metrics and drilling fluid properties and then labeled by well, section, and drilling run. The resultant workflow enables detailed analysis focusing on particular locations, drilling practices, hole conditions, and fluids. The joined, cleaned, and processed dataset includes information from thousands of wells drilled globally since 2016. The datasets from different sources differ in level of detail but are complementary, providing a broader picture when merged. The data are organized and visualized on dashboards, enabling in-depth analysis through intuitive filtering on a variety of conditions, including location, drilling run type, depth, drill bits and tools used, and drilling fluid type and properties. The main drilling performance metrics are distance drilled per run and run duration, which are used to calculate the run-average rate of penetration (ROP). Reasons for pulling out of the hole (POOH) and risks for POOH are extracted from text comments in the daily drilling reports.
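The core join-and-derive step described above can be sketched with pandas: merge bit-run records with fluid records on the shared well/section/run keys, then compute run-average ROP as distance drilled divided by run duration. The table contents and column names below are hypothetical placeholders, not the actual schema of the service-company databases.

```python
import pandas as pd

# Hypothetical bit-run records from the drill-bit business line.
bit_runs = pd.DataFrame({
    "well": ["W1", "W1", "W2"],
    "section": ["12.25in", "8.5in", "12.25in"],
    "run": [1, 2, 1],
    "distance_drilled_ft": [4200.0, 6100.0, 3900.0],
    "run_duration_hr": [60.0, 95.0, 52.0],
})

# Hypothetical fluid records from the drilling-fluids business line.
fluid_runs = pd.DataFrame({
    "well": ["W1", "W1", "W2"],
    "section": ["12.25in", "8.5in", "12.25in"],
    "run": [1, 2, 1],
    "fluid_type": ["WBM", "OBM", "WBM"],
})

# Join the two domains on the shared (well, section, run) keys.
merged = bit_runs.merge(fluid_runs, on=["well", "section", "run"], how="inner")

# Run-average ROP = distance drilled per run / run duration.
merged["rop_ft_per_hr"] = merged["distance_drilled_ft"] / merged["run_duration_hr"]
print(merged[["well", "run", "fluid_type", "rop_ft_per_hr"]])
```

In practice the reconciliation step (unifying location, operator, and well identifiers across sources) happens before this merge, so that the join keys actually agree between databases.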
This enables tracking of abnormal run terminations caused by drilling tool failures, wellbore integrity problems, and substandard drilling and hole-conditioning practices, especially at section total depth (TD) or due to drilling fluid issues. Aggregated metrics of minimum, maximum, and median are used for high-level data evaluation, while statistical significance of effects and causality are analyzed in detail for selected cases. Based on the data, several example analyses are presented that focus on the effect of water-based versus oil-based drilling fluids on drilling performance in the major oil fields of the United States. Holistic analysis of the effects of drilling fluids on drilling performance becomes possible through this well construction cross-domain data fusion. The developed workflow enables analysis of drilling fluid-related big data covering tens of thousands of wells globally. The analysis results are expected to improve drilling efficiency and reliability and ultimately reduce operators' total well expenditures.
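Extracting POOH reasons from free-text daily-report comments is essentially a text-classification step. A minimal rule-based sketch is shown below; the keyword patterns and category labels are assumptions for illustration, not the actual rules used in the workflow (which could equally be implemented with a trained text classifier).

```python
import re

# Hypothetical keyword rules mapping POOH comments to reason categories.
# Checked in order; the first matching rule wins.
POOH_RULES = {
    "tool_failure": re.compile(r"\b(motor|mwd|lwd|rss)\s+fail", re.IGNORECASE),
    "bit_worn": re.compile(r"\b(dull|worn)\s+bit\b", re.IGNORECASE),
    "section_td": re.compile(r"\b(section\s+)?td\b", re.IGNORECASE),
    "fluid_issue": re.compile(r"\b(losses|lost\s+circulation|barite\s+sag)\b", re.IGNORECASE),
}

def classify_pooh(comment: str) -> str:
    """Return the first POOH reason category whose pattern matches."""
    for label, pattern in POOH_RULES.items():
        if pattern.search(comment):
            return label
    return "other"

comments = [
    "POOH due to MWD failure at 9,850 ft",
    "Reached section TD, POOH to run casing",
    "Severe losses observed, POOH to change mud system",
]
labels = [classify_pooh(c) for c in comments]
print(labels)  # → ['tool_failure', 'section_td', 'fluid_issue']
```

Once every run carries a reason label, abnormal terminations (tool failures, fluid issues) can be counted and filtered on the dashboards alongside the ROP metrics.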
Ensuring a proper apples-to-apples comparison is a challenge in drilling performance evaluation. When assessing the effect of a particular drilling technology, such as a bit, bottomhole assembly (BHA), or mud type, on the rate of penetration (ROP) or other drilling performance criteria, all other factors must be fixed to truly isolate the effect. Traditionally, performance evaluation starts with manual identification of reasonably similar entities, such as drilling runs or well sections, by means of numerous selection criteria, e.g., location, depths, inclinations, drilling conditions, and tools. The selected drilling performance metrics are then compared using statistical analysis techniques of varying thoroughness. Such analyses are laborious and are usually limited to just a handful of cases due to practical reasons and time constraints. Furthermore, they are difficult to apply to large datasets of hundreds or thousands of wells, and there is always a risk of missing an important combination of factors under which the effect is pronounced. Conclusions based on these analyses may therefore be insufficiently justified or even confirmation-biased, leading to suboptimal technical and business decisions. This paper presents a combined machine learning and statistical analysis workflow addressing these challenges. The workflow a) discovers similar entities (wells, intervals, runs) in big datasets; b) extracts subsets of similar entities (i.e., "apples") for evaluation; c) applies rigorous statistical tests to quantify the effect of a technology (mud type, BHA type, bit type) on a metric (ROP, success rate) and its statistical significance; and, finally, d) returns information on areas and sets of conditions where the effect is pronounced (or not).
In the statistical analysis workflow, the user first specifies the drilling technology of interest and the drilling performance metrics, and then defines the factors and parameters to be fixed to better isolate the effect of the technology. The historical data on thousands of entities are then preprocessed, and the entities are clustered by similarity across the multitude of factors using the k-means algorithm. Statistical tests are performed automatically on each cluster, quantifying the magnitude of the technology's effect on the performance criteria and calculating p-values as the measure of its statistical significance. The results are presented as a series of per-cluster summaries of the effects, which allow zooming into individual clusters to review drilling parameters and perform further in-depth analysis if necessary. All steps of the workflow are presented in this paper, including data processing details and the reasons for selecting specific clustering algorithms and statistical tests. Several examples of successful applications of the workflow to actual drilling data for thousands of wells are provided, focusing on the effects of BHAs, steering tools, and drilling muds on drilling performance. This unique approach can be used to improve other drilling performance evaluation workflows.
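The cluster-then-test idea above can be sketched as follows: cluster runs on context features so that each cluster contains comparable "apples," then test the technology effect within each cluster. Everything here is a simplified illustration on simulated data; the feature choices, cluster count, and the built-in mud-type effect are assumptions, not results from the paper.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Hypothetical run-level data: context features and a mud-type flag.
n = 600
features = np.column_stack([
    rng.uniform(2000, 15000, n),   # measured depth, ft
    rng.uniform(0, 90, n),         # inclination, deg
])
mud_is_obm = rng.integers(0, 2, n).astype(bool)
# Simulated ROP with an artificial +10 ft/hr uplift for OBM runs.
rop = rng.normal(80, 15, n) + 10 * mud_is_obm

# Standardize features so depth does not dominate the distance metric,
# then cluster runs into groups of similar drilling context.
z = (features - features.mean(axis=0)) / features.std(axis=0)
cluster_ids = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z)

# Within each cluster, quantify the mud-type effect and its significance
# with Welch's t-test (no equal-variance assumption).
for c in range(4):
    in_c = cluster_ids == c
    obm, wbm = rop[in_c & mud_is_obm], rop[in_c & ~mud_is_obm]
    t, p = stats.ttest_ind(obm, wbm, equal_var=False)
    print(f"cluster {c}: n={in_c.sum()}, "
          f"ROP uplift={obm.mean() - wbm.mean():.1f} ft/hr, p={p:.3g}")
```

Clustering first means each test compares runs drilled under similar conditions, which is what makes the resulting per-cluster effect estimates defensible.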
Flowback aid surfactants are key components for optimal water recovery after a fracturing treatment. However, comprehensive guidelines and discovery workflows for selecting an optimal flowback aid are lacking. A suite of carefully designed high-throughput screening tests, coupled with industry performance tests, was developed and applied to over 50 surfactants representing 12 chemical classes to create a portfolio of formation-specific flowback aids. Experiments were carried out in a two-stage approach focusing both on intrinsic surfactant properties and on interactions with the reservoir environment. In the first stage, promising surfactant candidates were selected using high-throughput testing of surface tension, critical micelle concentration, cloud point, mineral adsorption, emulsification, and heat-aged stability. In the second stage, the efficacy of surfactant candidates was assessed by testing fracturing fluid cleanup from selected shale matrices. Trends in critical performance parameters were observed to correlate with simple engineering design criteria, such as the hydrophile-lipophile balance (HLB), and with field conditions, such as salinity. All surfactants tested showed efficient surface tension reduction, with minimal effect of salinity on surface tension. Critical micelle concentrations, however, decreased with increasing salinity, and the effect was more pronounced for surfactants with a greater HLB number. Further candidate differentiation for the optimal surfactant package was achieved by testing adsorption on shale minerals. The validity of the approach was confirmed by testing cleanup from sand and shale matrices with the final product candidates.