We investigate the problem of learning to predict moves in the board game of Go from game records of expert players. In particular, we obtain a probability distribution over legal moves for professional play in a given position. This distribution has numerous applications in computer Go, including serving as an efficient stand-alone Go player. It would also be effective as a move selector and move sorter for game-tree search and as a training tool for Go players. Our method has two major components: a) a pattern extraction scheme for efficiently harvesting patterns of a given size and shape from expert game records, and b) a Bayesian learning algorithm (in two variants) that learns a distribution over the values of a move in a given board position based on the local pattern context. The system is trained on 181,000 expert games and shows excellent prediction performance, predicting the professional player's exact move in 34% of test positions.
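The pattern-based Bayesian idea above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: pattern keys are opaque stand-ins for the harvested local patterns, and the Beta-posterior scoring is a simplified hypothetical variant of learning a value distribution per pattern.

```python
from collections import defaultdict

class PatternMovePredictor:
    """Sketch: rank legal moves by a Beta-posterior estimate of how often
    the local pattern around each candidate move was chosen by experts.
    Pattern keys are hypothetical stand-ins for harvested patterns."""

    def __init__(self, prior_played=1.0, prior_skipped=1.0):
        self.played = defaultdict(float)   # times this pattern's move was chosen
        self.skipped = defaultdict(float)  # times this pattern was legal but not chosen
        self.a, self.b = prior_played, prior_skipped

    def update(self, chosen_pattern, other_legal_patterns):
        """Record one expert decision: one pattern chosen, the rest passed over."""
        self.played[chosen_pattern] += 1
        for p in other_legal_patterns:
            self.skipped[p] += 1

    def score(self, pattern):
        # Posterior mean of Beta(a + played, b + skipped)
        a = self.a + self.played[pattern]
        b = self.b + self.skipped[pattern]
        return a / (a + b)

    def distribution(self, legal_patterns):
        """Normalize per-pattern scores into a distribution over legal moves."""
        scores = [self.score(p) for p in legal_patterns]
        total = sum(scores)
        return [s / total for s in scores]
```

Used as a stand-alone player, one would simply sample from (or take the argmax of) `distribution`; used as a move sorter, the scores provide the ordering for game-tree search.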
The prediction accuracy of any learning algorithm depends heavily on the quality of the selected features, but the task of feature construction and selection is often tedious and nonscalable. In recent years, however, there have been numerous projects with the goal of constructing general-purpose or domain-specific knowledge bases with entity-relationship-entity triples extracted from various Web sources or collected from user communities, e.g., YAGO, DBpedia, Freebase, UMLS, etc. This paper advocates the simple and yet far-reaching idea that the structured knowledge contained in such knowledge bases can be exploited to automatically extract features for general learning tasks. We introduce an expressive graph-based language for extracting features from such knowledge bases and a theoretical framework for constructing feature vectors from the extracted features. Our experimental evaluation on different learning scenarios provides evidence that the features derived through our framework can considerably improve prediction accuracy, especially when the labeled data at hand is scarce.
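The core idea of deriving features from knowledge-base triples can be sketched as follows. This is a simplified hypothetical illustration, not the paper's graph language: here each feature is just a relation path from an entity to a reachable node, and feature vectors are binary indicators over a fixed vocabulary.

```python
from collections import defaultdict

def extract_path_features(triples, entity, max_hops=2):
    """Sketch: enumerate (relation-path, endpoint) pairs reachable from
    `entity` in up to `max_hops` steps over (subject, relation, object)
    triples. A stand-in for a richer graph-based feature language."""
    out = defaultdict(list)
    for s, r, o in triples:
        out[s].append((r, o))
    features = set()
    frontier = [(entity, ())]
    for _ in range(max_hops):
        nxt = []
        for node, path in frontier:
            for r, o in out.get(node, []):
                new_path = path + (r,)
                features.add(("/".join(new_path), o))
                nxt.append((o, new_path))
        frontier = nxt
    return features

def vectorize(feature_sets, vocab):
    """Binary bag-of-features vectors over a fixed feature vocabulary."""
    return [[1 if f in fs else 0 for f in vocab] for fs in feature_sets]
```

For a toy KB with `("Einstein", "bornIn", "Ulm")` and `("Ulm", "locatedIn", "Germany")`, the entity `Einstein` would receive both the one-hop feature `("bornIn", "Ulm")` and the two-hop feature `("bornIn/locatedIn", "Germany")`; such derived features can then feed any downstream learner.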
We present a new global scale-up technology for calculating effective permeability and/or transmissibility. Using this technology, we apply global flow solutions to improve scale-up accuracy. Global scale-up was proposed in the 1980s, and its benefits are well described in the literature. However, global scale-up has not been adopted by the industry due to significant technical challenges in its application to real reservoir models. These models are often characterized by complex features such as faults, pinch-outs, and isolated geobodies. Here, we review several novel technologies that we have developed to overcome these difficulties. Numerical examples applying global scale-up to several reservoir models are presented, and comparisons are made with local scale-up methods currently used in the industry. The examples demonstrate that our new global scale-up technology leads to significant improvements in scale-up accuracy. In particular, when applied to challenging models with complicated fine-scale connectivity, the global scale-up method preserves the fine-scale connectivity more accurately than local scale-up methods; sometimes, dramatic differences are seen. Moreover, we note that global scale-up can be especially effective when used in conjunction with unstructured grids. Accurate scale-up is a critical link between fine-scale geologic descriptions and the coarse-scale reservoir simulation models used for development planning and reservoir management. Predictive reservoir models that are consistent with geologic and production data gathered at different scales are critical for these tasks. Global scale-up is a promising technique for building more accurate reservoir models.

Introduction

Reservoir modeling is a critical component in development planning and production management of oil and gas fields. The ultimate goal of reservoir modeling is to aid the decision-making process throughout all stages of field life.
During early field development, reservoir models are used to assess risk and uncertainty in field performance based on limited data. Once production begins, reservoir models are periodically refined or updated based on reservoir surveillance data. The updated models are then used for making field management decisions, such as in-fill drilling. For mature fields, accurate reservoir models are required to evaluate opportunities for enhanced oil recovery. A significant challenge in building predictive reservoir models is ensuring that the models are consistent with data collected at multiple scales. Reservoir models built at different scales for different purposes need to be consistent with each other and with all available data. Such consistency is important for assessing uncertainty and for understanding field geology. The latter is typically achieved through history matching reservoir models using field production and 4D seismic data along with other data. Although models that match field performance are non-unique, those that are consistent with both the underlying geology and the measured data provide better predictability [1]. This paper focuses on the consistent modeling of permeability at different scales. It is well known that permeability measured from core analysis, well logs, and well tests can be very different (e.g., [2]), because each measurement probes different spatial and temporal scales. The scale difference can span orders of magnitude: core plug measurements are conducted at the centimeter scale, whereas well tests measure permeability at spatial scales of 10 to 10³ meters. Consistently incorporating all these data into a reservoir model with cell sizes of 50 to hundreds of meters is non-trivial. Unlike porosity and other volumetric properties, permeabilities at different scales are not related by simple averaging equations. Single-phase flow-based scale-up is commonly used in the industry to link permeability data and models across scales.
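The point that permeability does not obey simple averaging can be illustrated with the classical layered-medium bounds: the same stack of layers has a different effective permeability depending on flow direction, bounded below by the harmonic mean (flow across layers) and above by the arithmetic mean (flow along layers). The layer values below are hypothetical, chosen only for illustration.

```python
def arithmetic_mean(ks):
    """Effective permeability for flow parallel to homogeneous layers."""
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    """Effective permeability for flow perpendicular to homogeneous layers."""
    return len(ks) / sum(1.0 / k for k in ks)

# Hypothetical fine-scale layer permeabilities, in millidarcy
layers = [500.0, 50.0, 5.0]

k_parallel = arithmetic_mean(layers)  # flow along the layers
k_series = harmonic_mean(layers)      # flow across the layers
```

For these values the two estimates differ by more than an order of magnitude, which is why a single averaging rule cannot replace flow-based scale-up for general heterogeneous media.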
Within this methodology, coarse scale permeability is calculated from numerical flow solutions on fine scale reservoir models; see [3–7] for comprehensive reviews. In this paper, we present a single-phase scale-up technology using global flow solutions and its application to reservoir modeling.
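The basic flow-based scale-up computation can be sketched as follows: impose a pressure drop across a fine-scale region, solve the steady single-phase pressure equation, and back out an effective permeability from the resulting Darcy flux. This is a minimal local-upscaling sketch on a 2D grid with unit cells, not the authors' global method; the grid and permeability values in the test are illustrative.

```python
import numpy as np

def upscale_keff(k, dp=1.0):
    """Flow-based effective permeability of a 2D fine grid (unit cells):
    pressure drop `dp` imposed left to right, no-flow top and bottom.
    k is a (ny, nx) array of cell permeabilities."""
    ny, nx = k.shape
    n = nx * ny
    idx = lambda i, j: j * nx + i
    A = np.zeros((n, n))
    b = np.zeros(n)
    for j in range(ny):
        for i in range(nx):
            m = idx(i, j)
            # Interior faces: harmonic-mean transmissibility between neighbors
            if i + 1 < nx:
                t = 2 * k[j, i] * k[j, i + 1] / (k[j, i] + k[j, i + 1])
                A[m, m] += t; A[m, idx(i + 1, j)] -= t
                A[idx(i + 1, j), idx(i + 1, j)] += t; A[idx(i + 1, j), m] -= t
            if j + 1 < ny:
                t = 2 * k[j, i] * k[j + 1, i] / (k[j, i] + k[j + 1, i])
                A[m, m] += t; A[m, idx(i, j + 1)] -= t
                A[idx(i, j + 1), idx(i, j + 1)] += t; A[idx(i, j + 1), m] -= t
            # Dirichlet boundaries via half-cell transmissibilities
            if i == 0:
                t = 2 * k[j, i]
                A[m, m] += t; b[m] += t * dp   # inlet pressure = dp
            if i == nx - 1:
                t = 2 * k[j, i]
                A[m, m] += t                   # outlet pressure = 0
    p = np.linalg.solve(A, b)
    # Total inflow across the inlet face; Darcy: k_eff = q * L / (H * dp)
    q = sum(2 * k[j, 0] * (dp - p[idx(0, j)]) for j in range(ny))
    return q * nx / (ny * dp)
```

Two sanity checks that follow from the averaging bounds: a uniform field recovers its own permeability, and homogeneous layers parallel to the flow recover the arithmetic mean of the layer values. A global method, as discussed in the paper, would instead use flow solutions computed on the entire model (or a large region) rather than on each coarse cell in isolation.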