A wide variety of strategies have been used to create agents in the growing field of real-time strategy AI. However, a frequent problem is the necessity of hand-crafting competencies, which becomes prohibitively difficult in a large space with many corner cases. A preferable approach would be to learn these competencies from the wealth of expert play available. We present a system that uses the Generalized Sequential Pattern (GSP) algorithm from data mining to find common patterns in StarCraft: Brood War replays at both the micro- and macro-level, and verify that these correspond to human understanding of expert play. In the future, we hope to use these patterns to learn tasks and goals in an unsupervised manner for an HTN planner.
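To make the mining step concrete, the following is a minimal sketch of GSP-style frequent-subsequence mining over replay event streams. It is not the system described above: the event names are illustrative, and the candidate-generation step is simplified (each frequent pattern is extended by one frequent event, rather than GSP's full join-and-prune), but the support-counting logic over ordered, possibly gapped subsequences is the same idea.

```python
def is_subsequence(pattern, sequence):
    """True if `pattern` occurs in `sequence` as an ordered (possibly gapped) subsequence."""
    it = iter(sequence)
    return all(event in it for event in pattern)  # `in` consumes the iterator, preserving order

def support(pattern, sequences):
    """Number of sequences containing `pattern`."""
    return sum(is_subsequence(pattern, seq) for seq in sequences)

def gsp(sequences, min_support):
    """Simplified GSP: grow frequent patterns level by level until none survive."""
    events = sorted({ev for seq in sequences for ev in seq})
    frequent = [(ev,) for ev in events if support((ev,), sequences) >= min_support]
    level, results = frequent, list(frequent)
    while level:
        # Extend each frequent k-pattern by each frequent single event.
        candidates = {p + q[-1:] for p in level for q in frequent}
        level = [c for c in sorted(candidates) if support(c, sequences) >= min_support]
        results.extend(level)
    return results

# Toy "replays" as ordered event streams (hypothetical event names):
replays = [
    ["scout", "expand", "attack"],
    ["scout", "attack"],
    ["expand", "attack"],
]
patterns = gsp(replays, min_support=2)
# Patterns like ("scout", "attack") and ("expand", "attack") survive with support >= 2.
```

In a real replay corpus the events would be build orders and unit commands, and the mined patterns are what would be checked against human notions of standard play.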
While Hierarchical Task Networks (HTNs) are frequently cited as flexible and powerful planning models, they are often passed over because of the intensive labor required of experts and programmers to create and refine the model by hand. Recent work has begun to address this issue by learning aspects of an HTN model from demonstration, or even the whole framework, but the focus so far has been on simple domains that lack many of the challenges of the real world, such as imperfect information and real-time environments. I plan to extend this work in the domain of real-time strategy (RTS) games, which have recently gained popularity as a challenging and complex domain for AI research.
One of the major weaknesses of current real-time strategy (RTS) game agents is high-level spatial reasoning. A key challenge in developing spatial reasoning modules for RTS agents is evaluating a given agent's ability at this competency, because of the inevitable confounding factors created by the complexity of these agents. We propose a simplified game that mimics the spatial reasoning aspects of more complex games while removing other complexities. Within this framework, we analyze the effectiveness of classical reinforcement learning for spatial management in order to build a detailed evaluative standard across a broad set of opponent strategies. We show that, against a suite of opponents with fixed strategies, basic Q-learning is able to learn a strategy to beat each one. In addition, we demonstrate that performance against unseen strategies improves with prior training against other, distinct strategies. We also test a modification of the basic algorithm that includes multiple actors, to speed learning and increase scalability. Finally, we discuss the potential for knowledge transfer to more complex games with similar components.
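For readers unfamiliar with the baseline method, here is a minimal sketch of tabular Q-learning on a toy environment. This is not the simplified game from the abstract: the environment is a hypothetical one-dimensional map where the agent must move from position 0 to a goal, standing in for the general pattern of learning spatial moves from reward alone.

```python
import random
from collections import defaultdict

# Toy 1-D "map": start at position 0, goal at N-1; actions move left/right.
N = 5
ACTIONS = [-1, +1]

def step(state, action):
    """Deterministic transition with a small per-step cost and a goal reward."""
    nxt = max(0, min(N - 1, state + action))
    done = nxt == N - 1
    reward = 1.0 if done else -0.01
    return nxt, reward, done

def q_learning(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning update toward the bootstrapped target.
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q

Q = q_learning()
# The greedy policy should move right (+1) from every non-goal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
```

Against fixed opponent strategies, the same loop applies with the game state replacing the position and game outcome replacing the goal reward; the multi-actor variant mentioned above factors this table across several simultaneously acting units.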