Abstract-We investigate the use of genetic algorithms to evolve AI players for real-time strategy games. To overcome the knowledge acquisition bottleneck found in traditional expert systems, scripts, or decision trees, we evolve players through co-evolution. Our game players are implemented as resource allocation systems. Influence map trees are used to analyze the game state and determine promising places to attack, defend, and so on. These spatial objectives are chained to non-spatial objectives (train units, build buildings, gather resources) in a dependency graph. Players are encoded as individuals in a genetic algorithm and co-evolved against each other; the results show evolved strategies that are innovative, robust, and capable of defeating a suite of hand-coded opponents.
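To make the co-evolutionary setup concrete, the sketch below shows a minimal two-population loop in which each player is a fixed-length genome (standing in for influence-map weights and objective-graph priorities) and fitness comes from head-to-head matches. All names and parameters here (play_game, GENOME_LEN, the mutation scheme) are illustrative assumptions, not the authors' implementation.

```python
# Minimal co-evolution sketch; play_game is a stub standing in for a full RTS match.
import random

GENOME_LEN = 16      # e.g., influence-map weights plus objective priorities
POP_SIZE = 20
GENERATIONS = 50
MUT_RATE = 0.1

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def play_game(genome_a, genome_b):
    """Placeholder: returns +1 if A wins, -1 if B wins, 0 for a draw.
    A real implementation would decode each genome into a player and simulate."""
    return random.choice([1, -1, 0])

def fitness(pop_a, pop_b):
    """Score each individual in pop_a by the number of wins against all of pop_b."""
    return [sum(max(play_game(a, b), 0) for b in pop_b) for a in pop_a]

def next_generation(pop, scores):
    """Binary tournament selection plus Gaussian mutation."""
    new_pop = []
    for _ in range(len(pop)):
        i, j = random.randrange(len(pop)), random.randrange(len(pop))
        parent = pop[i] if scores[i] >= scores[j] else pop[j]
        child = [g + random.gauss(0, 0.2) if random.random() < MUT_RATE else g
                 for g in parent]
        new_pop.append(child)
    return new_pop

pop_a = [random_genome() for _ in range(POP_SIZE)]
pop_b = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scores_a = fitness(pop_a, pop_b)
    scores_b = fitness(pop_b, pop_a)
    pop_a = next_generation(pop_a, scores_a)
    pop_b = next_generation(pop_b, scores_b)
```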
We attack the problem of game balancing by using a coevolutionary algorithm to explore the space of possible game strategies and counter-strategies. We define a balanced game as one with no single dominating strategy. Balanced games are more fun and provide a more interesting strategy space for players to explore. However, proving mathematically that a game is balanced may not be possible, and the industry commonly relies on extensive and expensive human testing to balance games. We show how a coevolutionary algorithm can be used to test game balance, using the publicly available continuous-state capture-the-flag game CaST as our testbed. Our results show that coevolution can highlight game imbalances in CaST and provide intuition toward balancing the game. This helps eliminate dominating strategies, making the game more interesting because players must constantly adapt to opponent strategies.
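One simple way to turn a set of coevolved strategies into evidence about balance is to build a pairwise win matrix and check whether any strategy beats every other. The sketch below illustrates that check; the helper names and the binary win model are assumptions for illustration, and the actual CaST evaluation is far richer.

```python
# Imbalance check over a set of evolved strategies (illustrative only).
def win_matrix(strategies, play_game):
    """Pairwise results: entry [i][j] is 1 if strategy i beats strategy j, else 0."""
    n = len(strategies)
    return [[1 if i != j and play_game(strategies[i], strategies[j]) > 0 else 0
             for j in range(n)] for i in range(n)]

def dominating_strategies(matrix):
    """Flag any strategy that beats every other strategy in the set."""
    n = len(matrix)
    return [i for i in range(n)
            if all(matrix[i][j] == 1 for j in range(n) if j != i)]
```

Under this toy model, a balanced strategy set yields an empty list of dominating strategies; repeatedly non-empty results across coevolutionary runs would point toward an imbalance worth investigating.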
Abstract-We use a genetic algorithm to explore the space of pathfinding algorithms in Lagoon, a 3D naval real-time strategy game and training simulation. To aid in training, Lagoon tries to provide a rich environment with many agents (boats) that maneuver realistically. A*, the traditional pathfinding algorithm in games, is computationally expensive when run for many agents, and A* paths quickly lose validity as agents move. Although there is a large literature on making A* implementations faster, we want believability, and optimal paths may not be believable. In this paper we use a genetic algorithm to search a space of network search algorithms like A* for new pathfinding algorithms that are near-optimal, fast, and believable. Our results indicate that the genetic algorithm explores this space well and that the novel pathfinding algorithms it finds quickly produce near-optimal, more believable paths in Lagoon.
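A compact way to picture such a search space is a best-first search whose node-evaluation function is parameterized, so that different settings recover greedy search, standard A*, weighted A*, and other variants. The sketch below assumes a simple adjacency-list graph and a user-supplied heuristic; it is illustrative only and is not the encoding used in the paper.

```python
# Parameterized best-first search: a genome supplies the weights of the
# node-evaluation function f = w_g * g + w_h * h (weights are assumptions).
import heapq

def evolved_search(graph, start, goal, heuristic, w_g=1.0, w_h=1.0):
    """graph: dict mapping node -> list of (neighbor, edge_cost)."""
    frontier = [(0.0, start)]
    g_cost = {start: 0.0}
    parent = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:       # walk parents back to the start
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for nbr, cost in graph.get(node, []):
            new_g = g_cost[node] + cost
            if nbr not in g_cost or new_g < g_cost[nbr]:
                g_cost[nbr] = new_g
                parent[nbr] = node
                f = w_g * new_g + w_h * heuristic(nbr, goal)
                heapq.heappush(frontier, (f, nbr))
    return None

# Example grid usage; with w_g = w_h = 1.0 and an admissible heuristic this is A*.
grid = {(0, 0): [((1, 0), 1.0), ((0, 1), 1.0)],
        (1, 0): [((1, 1), 1.0)],
        (0, 1): [((1, 1), 1.0)],
        (1, 1): []}
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
path = evolved_search(grid, (0, 0), (1, 1), manhattan)
```

A GA could evolve these (and additional) parameters while scoring candidates on path cost, node expansions, and a believability measure.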
In this article we present a computational approach to developing effective training systems for virtual simulation environments. In particular, we focus on a naval simulation system used for training conning officers. Existing training solutions either require multiple expert personnel to control each vessel in a training scenario or are cumbersome for a single instructor to use. The inability of current technology to provide an automated mechanism for competitive, realistic boat behaviors thus compromises the goal of flexible, anytime, anywhere training. We propose an approach that reduces the time and effort required to train conning officers by integrating novel approaches to autonomous control within a simulation environment. Our solution is to develop intelligent, autonomous controllers that drive the behavior of each boat. To increase the system's efficiency, we provide a mechanism for creating such controllers from the demonstration of a navigation expert, using a simple programming interface. Our approach also deals with two significant and related challenges: the realism of the behavior exhibited by the automated boats and their real-time response to changes in the environment. We describe the control architecture that enables this real-time response and the repertoire of realistic behaviors we designed for this application. We also present our approach to facilitating the automatic authoring of training scenarios and demonstrate the capabilities of our system with experimental results.
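As a rough illustration of the kind of per-boat controller described here, the sketch below arbitrates among a small repertoire of behaviors once per simulation tick, with the highest-priority applicable behavior issuing the steering command. The behavior names, thresholds, and interface are hypothetical and are not the system's actual API.

```python
# Hypothetical per-boat controller: priority-based arbitration over behaviors.
class Behavior:
    def applicable(self, state):
        """Return True if this behavior should be considered this tick."""
        raise NotImplementedError

    def command(self, state):
        """Return a (heading_change_deg, throttle) steering command."""
        raise NotImplementedError

class AvoidCollision(Behavior):
    def applicable(self, state):
        return state["nearest_obstacle_dist"] < 50.0   # meters (illustrative)
    def command(self, state):
        return (45.0, 0.5)                             # turn away, slow down

class FollowWaypoint(Behavior):
    def applicable(self, state):
        return True                                    # default behavior
    def command(self, state):
        return (state["bearing_to_waypoint"], 1.0)

class BoatController:
    def __init__(self, behaviors):
        self.behaviors = behaviors                     # ordered by priority

    def tick(self, state):
        for behavior in self.behaviors:
            if behavior.applicable(state):
                return behavior.command(state)
        return (0.0, 0.0)                              # hold course

controller = BoatController([AvoidCollision(), FollowWaypoint()])
command = controller.tick({"nearest_obstacle_dist": 120.0,
                           "bearing_to_waypoint": -10.0})
```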
Abstract-Behavior-based architectures have many parameters that must be tuned to produce effective and believable agents. We use genetic algorithms to tune simple behavior-based controllers for predators and prey. First, the predator is tuned to maximize area coverage in a large asymmetric arena shared with a large number of identically tuned peers. Second, the GA tunes the predator against a single prey agent. Then we tune two predators against a single prey. The prey evolves against both a default predator and an evolved predator. The genetic algorithm finds high-performance controller parameters in a short amount of time and, after only a small number of evaluations, outperforms the same controllers hand-tuned by human programmers.
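Controllers of this kind can be as simple as a weighted blend of a few steering behaviors, with the GA searching over the weight vector. The sketch below shows such a predator controller; the behavior set and parameter names are assumptions for illustration, not the controllers used in the paper.

```python
# Illustrative predator controller: a weighted blend of chase, wall-avoidance,
# and wander behaviors. The weight vector is the genome a GA would tune.
import math
import random

def steer(pred, prey, arena_half, params):
    """pred, prey: (x, y) positions; arena_half: half-width of a square arena
    centered at the origin; params: (w_chase, w_avoid, w_wander)."""
    w_chase, w_avoid, w_wander = params
    # Chase: unit vector toward the prey.
    dx, dy = prey[0] - pred[0], prey[1] - pred[1]
    dist = math.hypot(dx, dy) or 1e-9
    chase = (dx / dist, dy / dist)
    # Avoid walls: push back toward the arena center as an edge gets closer.
    avoid = (-pred[0] / arena_half, -pred[1] / arena_half)
    # Wander: small random perturbation that encourages area coverage.
    wander = (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
    return (w_chase * chase[0] + w_avoid * avoid[0] + w_wander * wander[0],
            w_chase * chase[1] + w_avoid * avoid[1] + w_wander * wander[1])

# Example: a coverage-heavy parameter setting a GA might discover.
velocity = steer((10.0, -5.0), (40.0, 20.0), arena_half=100.0,
                 params=(0.6, 0.3, 0.8))
```

Fitness for such a genome would come from simulating the arena and measuring, for example, area covered or time to capture the prey.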