Background: Functional analyses of genomic data within the context of a priori biomolecular networks can yield valuable mechanistic insights. However, such analyses are not trivial, owing to the complexity of biological networks and the lack of computational methods for effectively integrating them with experimental data.

Results: We developed a software application suite, NetWalker, as a one-stop platform featuring several novel holistic (i.e., assessing the whole data distribution without requiring data cutoffs) data integration and analysis methods for network-based comparative interpretation of genome-scale data. The central analysis components, NetWalk and FunWalk, are novel random walk-based network analysis methods that assess entire data distributions together with network connectivity to prioritize the molecular and functional networks, respectively, most highlighted in the supplied data. Extensive interoperability between the analysis components and with external applications, including R, adds to the flexibility of data analyses. Here, we present a detailed computational analysis of our microarray gene expression data from MCF7 cells treated with lethal and sublethal doses of doxorubicin.

Conclusion: NetWalker, a detailed step-by-step tutorial containing the analyses presented in this paper, and a manual are available at http://netwalkersuite.org.
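The abstract names random walk-based scoring as the core of NetWalk and FunWalk without giving the algorithm. As a rough illustration only, the general technique can be sketched as a data-weighted random walk with restart; the toy graph, the restart probability, and the use of expression values as node weights below are illustrative assumptions, not NetWalker's published parameters.

```python
import numpy as np

def random_walk_with_restart(adjacency, node_weights, restart=0.15, tol=1e-8):
    """Score nodes by a random walk biased toward data-weighted nodes.

    adjacency    : (n, n) array, nonzero where two nodes are connected;
                   every node is assumed to have at least one edge
    node_weights : (n,) array of positive data values (e.g., expression)
    restart      : probability of jumping back to a data-weighted node
    """
    # Bias edge transitions by the data value of the target node
    # (an illustrative choice, not NetWalker's actual weighting).
    w = adjacency * node_weights[np.newaxis, :]
    transition = w / w.sum(axis=1, keepdims=True)

    # Restart distribution proportional to the node weights.
    seed = node_weights / node_weights.sum()

    # Iterate the walk to its stationary distribution.
    p = seed.copy()
    while True:
        p_next = (1 - restart) * transition.T @ p + restart * seed
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy example: a 4-node path graph with one highly expressed node;
# the walker concentrates probability around that node's neighborhood.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
weights = np.array([0.1, 0.2, 5.0, 0.3])
print(random_walk_with_restart(adj, weights))
```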
Upstream oil and gas industry services work to deliver success throughout the life cycle of the reservoir. With conventional sources of oil and gas declining, operators are increasingly turning their attention to unexplored and underdeveloped regions, such as high-pressure/high-temperature (HP/HT) and deepwater areas, and working to increase recoveries in mature fields. As reservoirs become more complex and drilling operations more expensive, there is a growing need to reduce inefficiencies and costs. Petroleum engineers increasingly use optimized formation evaluation techniques, software with three-dimensional (3D) visualization, and multidisciplinary data interpretation techniques. In addition, data from new downhole equipment provides reliable, real-time information about downhole conditions. With these improved techniques and better data, operators can better model, predict, and control their operations in real time, thereby reducing inefficiencies and cost. However, this massive integration of varied data, arriving in higher volumes and at increased speeds, has created a growing demand for actionable, predictive, data-driven analytics. Furthermore, these analytics must work in real time to quickly discover the critical data attributes and features for use in forecasting.

Attribute importance is a well-known statistical technique for identifying, within a set of attributes, the critical attributes and features that could affect a specific target. The benefits of deriving attribute importance include improved comprehension of the data set, reduced analysis effort, and reduced time and computational resources required for actionable predictive analytics. Attribute importance also aids in understanding the attribute space and its interactions, and it enables dimensionality reduction, thereby leading to comprehensible, accurate, "parsimonious" (i.e., simple) models that support actionable predictive analytics. Today, numerous standard and custom techniques, each with its own strengths and weaknesses, are available for performing attribute importance. Each technique applies a different function to evaluate and score an attribute's importance, producing a ranked subset of attributes. Hence, it is possible (and fairly common) to arrive at different subsets of important attributes depending on the choice and configuration of the techniques.

To solve this problem, a feasible and effective framework-based approach is proposed that uses multiple attribute importance techniques and then intelligently fuses their results to arrive at a single fused subset of important attributes. This paper describes this framework/"ensemble" (i.e., combined) approach, called Segmented Attribute Kerneling (SAK), and discusses the results obtained from applying it to a dataset incorporating real-time drilling surface data logs and drilling parameters. The approach runs multiple attribute importance algorithms simultaneously, finds the intersecting subset of important attributes across the techniques, and outputs a consolidated ranked set; in addition, it identifies and presents a ranked subset of the attributes excluded from the union. This paper compares the results of this approach to a single-technique approach based on the output of the predictive models.
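The abstract does not specify SAK's fusion rule. As a minimal sketch of the general idea it describes, the following assumes three common importance scorers, takes the intersection of their top-k selections ordered by mean rank, and reports the attributes no technique selected; the scorers, k, and the mean-rank ordering are all illustrative assumptions, not the SAK algorithm itself.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import f_regression, mutual_info_regression

# Synthetic stand-in for drilling surface data logs and parameters.
X, y = make_regression(n_samples=500, n_features=10, n_informative=4, random_state=0)
names = [f"attr_{i}" for i in range(X.shape[1])]

# Three attribute importance techniques, each scoring every attribute.
scores = {
    "f_test": f_regression(X, y)[0],
    "mutual_info": mutual_info_regression(X, y, random_state=0),
    "random_forest": RandomForestRegressor(random_state=0).fit(X, y).feature_importances_,
}

k = 5  # keep each technique's top-k attributes (illustrative choice)
top_sets = {m: set(np.argsort(s)[::-1][:k]) for m, s in scores.items()}

# Fused subset: attributes every technique agrees on (the intersection),
# consolidated by mean rank across techniques (rank 0 = most important).
ranks = {m: np.argsort(np.argsort(-s)) for m, s in scores.items()}
fused = set.intersection(*top_sets.values())
consolidated = sorted(fused, key=lambda i: np.mean([r[i] for r in ranks.values()]))
print("fused subset:", [names[i] for i in consolidated])

# Attributes excluded from the union, i.e., selected by no technique.
excluded = set(range(X.shape[1])) - set.union(*top_sets.values())
print("excluded:", [names[i] for i in sorted(excluded)])
```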
Heavy oil has gained significant attention recently for a multitude of reasons: growing demand for oil from developing economies, declining availability of easily recoverable or "conventional" oil, and significant advances in the required technology. Even though current estimates of heavy oil in place are three times those of conventional oil, heavy oil resources have only recently become economically viable because of sustained high oil prices. Improved technology has also driven the recovery risk down to minimal levels. The earliest recovery methods for heavy oil were largely cyclic stimulation, with steamflooding gaining acceptance in the 1970s. Despite other thermal and non-thermal recovery methods for heavy oil, steamflooding remains the most widely used technology; current production by steamfloods alone totals more than 1.1 million BOPD.

Previous studies have established how steamfloods are affected by parameters such as rock properties, oil composition, degree of steam override, sweep efficiency, steam quality, and steam injection rate. However, the capital-intensive nature and low profit margins of steamfloods mean that each field development decision is crucial, and oil recovery and margins are much more susceptible to uncertainties in oil price, well performance, facility costs, and subsurface parameters. While studies have been performed to corroborate the effects of subsurface parameters and economic uncertainties separately, there has been little advancement in coupling them all together in one unified study. In this paper, the effects of uncertainties on project net present value (NPV) are studied by coupling numerical reservoir simulation, a design-of-experiments-based approach to handling uncertainty, an established economic model, and a commercial optimization tool to determine the optimal field operating variables.
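The coupling the paper describes can be sketched in miniature: a design-of-experiments sweep over uncertain inputs, a toy stand-in for the reservoir simulator, and a standard NPV economic model. Every parameter name, range, and the production proxy below is an illustrative assumption, not the paper's actual model.

```python
from itertools import product

def npv(cash_flows, discount_rate=0.10):
    """NPV = sum over years t of cash_flows[t] / (1 + r)^t."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

def toy_steamflood(perm_md, steam_quality, oil_price, years=10):
    """Crude stand-in for a reservoir simulation plus economics run."""
    rate = 1000.0 * (perm_md / 500.0) * steam_quality   # BOPD proxy
    revenue = rate * 365 * oil_price                    # annual revenue, $
    opex = 8e6 + 2e6 * steam_quality                    # steam generation cost, $
    return [-50e6] + [revenue - opex] * years           # capex at t=0, then cash flows

# Two-level full factorial design over three uncertain parameters.
levels = {
    "perm_md": (200.0, 800.0),
    "steam_quality": (0.6, 0.8),
    "oil_price": (40.0, 90.0),
}
for perm, quality, price in product(*levels.values()):
    value = npv(toy_steamflood(perm, quality, price))
    print(f"perm={perm:5.0f}, quality={quality}, price={price:4.0f}: NPV = {value/1e6:8.1f} MM$")
```

An optimizer would then search the operating variables (e.g., injection rate) for the design point or expected case that maximizes NPV; that step is omitted here.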