The aim of General Game Playing (GGP) is to create intelligent agents that can automatically learn how to play many different games at an expert level without any human intervention. The traditional design model for GGP agents has been to use a minimax-based game-tree search augmented with an automatically learned heuristic evaluation function. The first successful GGP agents all followed that approach. Here we describe CADIAPLAYER, a GGP agent that takes a radically different approach: instead of a traditional game-tree search, it uses Monte-Carlo simulations for its move decisions. Furthermore, we empirically evaluate different simulation-based approaches on a wide variety of games; introduce a domain-independent enhancement for automatically learning search-control knowledge to guide the simulation playouts; and show how to adapt the simulation searches to be more effective in single-agent games. CADIAPLAYER has already proven its effectiveness by winning the 2007 and 2008 AAAI GGP competitions.
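To make the contrast with traditional game-tree search concrete, the following is a minimal sketch of UCT-style simulation-based move selection, the general family of methods the approach described above belongs to. The Game interface (legal_moves, apply, is_terminal, result) and the parameter values are illustrative assumptions, not CADIAPLAYER's actual API.

```python
# Minimal sketch of Monte-Carlo simulation-based move selection (UCT-style).
# The Game interface used here is a hypothetical stand-in.
import math
import random

class Node:
    def __init__(self):
        self.visits = 0
        self.value = 0.0          # accumulated playout reward
        self.children = {}        # move -> Node

def uct_select(node, moves, c=1.4):
    """Pick the move maximizing the UCT upper-confidence value."""
    def score(m):
        child = node.children.get(m)
        if child is None or child.visits == 0:
            return float("inf")   # try unexplored moves first
        exploit = child.value / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(moves, key=score)

def random_playout(game, state):
    """Play uniformly random moves to the end of the game."""
    while not game.is_terminal(state):
        state = game.apply(state, random.choice(game.legal_moves(state)))
    return game.result(state)

def simulate(game, state, node):
    """One simulation: descend the tree via UCT, then play out randomly."""
    if game.is_terminal(state):
        return game.result(state)
    move = uct_select(node, game.legal_moves(state))
    child = node.children.setdefault(move, Node())
    if child.visits == 0:         # leaving the tree: random playout from here
        reward = random_playout(game, game.apply(state, move))
    else:
        reward = simulate(game, game.apply(state, move), child)
    child.visits += 1
    child.value += reward
    node.visits += 1
    return reward

def decide_move(game, state, n_simulations=1000):
    root = Node()
    root.visits = 1
    for _ in range(n_simulations):
        simulate(game, state, root)
    # Playing the most-visited root move is a common, robust choice.
    return max(root.children, key=lambda m: root.children[m].visits)
```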
The aim of General Game Playing (GGP) is to create intelligent agents that can automatically learn how to play many different games at an expert level without any human intervention. One of the main challenges such agents face is to automatically learn knowledge-based heuristics in real-time, whether for evaluating game positions or for search guidance. In recent years, GGP agents that use Monte-Carlo simulations to reason about their actions have become increasingly popular. For competitive play such an approach requires an effective search-control mechanism for guiding the simulation playouts. Here we introduce several schemes for automatically learning search guidance based on both statistical and reinforcement-learning techniques. We compare the different schemes empirically on a variety of games and show that they improve significantly upon the current state-of-the-art in simulation control in GGP. For example, in the chess-like game Skirmish, which has proved a particularly challenging game for simulation-based GGP agents, an agent employing one of the proposed schemes achieves a 97% winning rate against an unmodified agent.
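As a concrete reference point for statistical simulation control of this kind, below is a minimal sketch in the spirit of the Move-Average Sampling Technique (MAST), the established baseline such schemes are typically compared against: each action's average return is tracked globally across playouts, and playout moves are drawn from a Gibbs (softmax) distribution over those averages. The Game interface, the temperature value, and the 0-100 goal scale are illustrative assumptions.

```python
# Sketch of MAST-style playout guidance: keep a global average return per
# action and bias playout action selection toward historically good actions
# via a Gibbs (softmax) distribution. Parameter values are illustrative.
import math
import random
from collections import defaultdict

class Mast:
    def __init__(self, tau=10.0):
        self.sum = defaultdict(float)    # action -> accumulated return
        self.count = defaultdict(int)    # action -> times played
        self.tau = tau                   # temperature (assumed value)

    def q(self, action, default=50.0):   # GGP goals commonly range 0..100
        n = self.count[action]
        return self.sum[action] / n if n else default

    def choose(self, actions):
        weights = [math.exp(self.q(a) / self.tau) for a in actions]
        return random.choices(actions, weights=weights)[0]

    def update(self, actions_played, ret):
        for a in actions_played:
            self.sum[a] += ret
            self.count[a] += 1

def playout(game, state, mast):
    """Play to the end, biased by MAST statistics; then update them."""
    played = []
    while not game.is_terminal(state):
        a = mast.choose(game.legal_moves(state))
        played.append(a)
        state = game.apply(state, a)
    ret = game.result(state)
    mast.update(played, ret)
    return ret
```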
Effective search control is one of the key components of any successful simulation-based game-playing program. In General Game Playing (GGP), learning useful search-control knowledge is a particularly challenging task because it must be done in real-time during online play. Here we describe the search-control techniques used in the 2010 version of the GGP agent CADIAPLAYER, and show how they have evolved over the years to become increasingly effective and robust across a wide range of games. In particular, we present a new combined search-control scheme (RAVE/MAST/FAST) for biasing action selection. The scheme proves quite effective on a wide range of games, including chess-like games, which have until now been particularly challenging for simulation-based GGP agents.
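To illustrate how a combined scheme of this kind can bias action selection, here is a hedged sketch of the RAVE ingredient: blending the all-moves-as-first (RAVE) estimate of an action with its direct Monte-Carlo estimate, trusting RAVE while a node is young and the direct estimate as visits accumulate. The sqrt(k / (3n + k)) weighting follows Gelly and Silver's commonly used form; the equivalence parameter k is an assumption, not necessarily CADIAPLAYER's setting.

```python
# Sketch of RAVE-blended action values for tree-node selection.
# q, n      : direct Monte-Carlo estimate and visit count for the action
# q_rave,
# n_rave    : all-moves-as-first estimate and its sample count
# k         : RAVE equivalence parameter (assumed value)
import math

def blended_value(q, n, q_rave, n_rave, k=1000):
    """Interpolate between the RAVE estimate (dominant when n is small)
    and the direct estimate (dominant as n grows)."""
    if n == 0:
        return q_rave if n_rave > 0 else float("inf")  # unexplored: urgent
    beta = math.sqrt(k / (3 * n + k))
    return (1 - beta) * q + beta * q_rave
```

In a full agent this blended value would replace the plain exploitation term inside the UCT selection rule; MAST- and FAST-derived values can additionally seed or bias the same computation.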
Alpha-Beta pruning is one of the most powerful and fundamental minimax search improvements. It was designed for sequential two-player zero-sum perfect-information games. In this paper we introduce an Alpha-Beta-like sound pruning method for the more general class of “stacked matrix games”, which allow simultaneous moves by both players. This is accomplished by maintaining upper and lower bounds on the achievable payoffs in states with simultaneous actions, and by pruning dominated actions based on the feasibility of certain linear programs. Empirical data shows considerable savings in expanded nodes compared to naive depth-first move computation without pruning.
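As a sketch of one ingredient of such a method, the code below prunes a row action that is strictly dominated by another, given only interval bounds on the payoffs of partially searched simultaneous-move cells. The full method described above also tests dominance against mixed strategies via linear-program feasibility; that part is omitted here, and the matrix-of-intervals representation is an illustrative assumption.

```python
# Sketch of pure-strategy dominance pruning with interval payoff bounds.
# bounds[i][j] = (lo, hi): proven bounds on the row player's payoff when
# the row player picks action i and the column player picks action j.
def prune_dominated_rows(bounds):
    """Return the indices of row actions not strictly dominated by
    another row action. Row j strictly dominates row i if j's lower
    bound beats i's upper bound in every column."""
    n_rows = len(bounds)
    survivors = []
    for i in range(n_rows):
        dominated = any(
            j != i and all(b_lo > a_hi
                           for (_, a_hi), (b_lo, _)
                           in zip(bounds[i], bounds[j]))
            for j in range(n_rows)
        )
        if not dominated:
            survivors.append(i)
    return survivors
```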
General Game Playing (GGP) agents must be capable of playing a wide variety of games skillfully. Monte-Carlo Tree Search (MCTS) has proven an effective reasoning mechanism for this challenge, as is reflected by its popularity among designers of GGP agents. Providing GGP agents with knowledge relevant to the game at hand in real time is, however, a challenging task. In this paper we propose two enhancements for MCTS in the context of GGP, aimed at improving the effectiveness of the simulations in real time based on in-game statistical feedback. The first allows early termination of lengthy, uninformative simulations, while the second improves the action-selection strategy when both explored and unexplored actions are available. The methods are empirically evaluated in a state-of-the-art GGP agent and shown to yield an overall significant improvement in playing strength.
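The following hedged sketch illustrates the first enhancement: a playout that runs well beyond the typical game length observed so far is cut off and scored as uninformative. The running-average bookkeeping, the cutoff factor, and the neutral score are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of early termination of lengthy playouts. A simulation that
# greatly exceeds the average length of completed playouts is cut off
# and given a neutral score instead of being played to the end.
import random

class EarlyCutoff:
    def __init__(self, factor=2.0, neutral=50.0):   # assumed values; GGP goals 0..100
        self.avg_len = None       # running average of completed-playout length
        self.factor = factor
        self.neutral = neutral

    def playout(self, game, state):
        steps = 0
        while not game.is_terminal(state):
            if self.avg_len and steps > self.factor * self.avg_len:
                return self.neutral          # cut off: likely uninformative
            state = game.apply(state, random.choice(game.legal_moves(state)))
            steps += 1
        # Only completed playouts update the length statistic.
        self.avg_len = steps if self.avg_len is None else 0.9 * self.avg_len + 0.1 * steps
        return game.result(state)
```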