2011
DOI: 10.1609/aiide.v7i1.12433
Learning Probabilistic Behavior Models in Real-Time Strategy Games

Abstract: We study the problem of learning probabilistic models of high-level strategic behavior in the real-time strategy (RTS) game StarCraft. The models are automatically learned from sets of game logs and aim to capture the common strategic states and decision points that arise in those games. Unlike most work on behavior/strategy learning and prediction in RTS games, our data-centric approach is not biased by or limited to any set of preconceived strategic concepts. Further, since our behavior model is based on th…

Cited by 43 publications (20 citation statements) · References 7 publications
“…Naturally, PGMs have seen an increased popularity in the RTS domain over the past few years. Hidden Markov Models (a simple type of PGM) have been used to learn high-level strategies from data (Dereszynski et al 2011) and PGMs have been used to predict the opponent's opening strategy (Synnaeve and Bessiere 2011b) and to guess the order that the opponent is building units in Synnaeve and Bessiere (2011a). The same research group has also developed models that allow their RTS agent to make decisions about where on the map it should send units to attack and with what kinds of units (Synnaeve and Bessiere 2012a).…”
Section: Bayesian Network
confidence: 99%
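The Hidden Markov Model approach mentioned above infers a hidden "strategic state" from a stream of observed game events. A minimal sketch of the inference step (the forward algorithm) is below; the state names, observation symbols, and all parameter values are hypothetical illustrations, not the model learned by Dereszynski et al. (2011), whose parameters would be estimated from game logs.

```python
import numpy as np

# Hypothetical hidden strategic states and observed build actions.
states = ["expand", "tech", "rush"]
obs_symbols = ["worker", "barracks", "army"]

# T[i][j] = P(next state j | current state i); rows sum to 1.
T = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
# E[i][k] = P(observation k | state i); rows sum to 1.
E = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
pi = np.array([0.5, 0.3, 0.2])  # initial state distribution

def forward_filter(obs_idx):
    """Return P(state_t | obs_1..t) for each t (normalized forward algorithm)."""
    belief = pi * E[:, obs_idx[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for o in obs_idx[1:]:
        belief = (belief @ T) * E[:, o]  # predict, then condition on observation
        belief /= belief.sum()
        beliefs.append(belief)
    return beliefs

# Repeated army production should shift belief toward the "rush" state.
seq = [obs_symbols.index(o) for o in ["worker", "army", "army", "army"]]
final = forward_filter(seq)[-1]
print(states[int(np.argmax(final))])  # prints "rush"
```

With real replay data, the transition and emission parameters would be fit with expectation-maximization (Baum-Welch) rather than hand-specified as here.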
“…Which policies are used can be determined by making the bot aware of certain aspects of the opponent. For example, if you have determined that the opponent is implementing strategy A, and you have previously determined that strategy B is a good counter to A, then you can start running strategy B (Dereszynski et al 2011). Likewise, UAlbertaBot (code.google.com/p/ualbertabot), which won last year's AI-IDE StarCraft AI competition, currently uses simulation results to determine if it should engage the opponent in combat scenarios or not.…”
Section: Motivation
confidence: 99%
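The counter-strategy idea in the excerpt above reduces to a lookup: once the opponent's strategy is inferred, select a pre-computed response. A tiny sketch follows; the strategy names and counter mapping are illustrative assumptions, not taken from the cited papers.

```python
# Hypothetical mapping from an inferred opponent strategy to a prepared counter.
COUNTERS = {
    "rush": "wall-in and defend",
    "expand": "early aggression",
    "tech": "fast expand",
}

def choose_response(predicted_strategy: str, default: str = "standard macro") -> str:
    """Return the pre-computed counter for the predicted opponent strategy,
    falling back to a default policy when the strategy is unrecognized."""
    return COUNTERS.get(predicted_strategy, default)

print(choose_response("rush"))  # prints "wall-in and defend"
```

In practice the mapping would be learned or tuned from match outcomes, and the fallback policy matters because early-game inference is often uncertain, as the excerpt notes.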
“…Although there has been a recent trend of using StarCraft replays as data for machine learning tasks (Weber and Mateas 2009; Synnaeve and Bessière 2011; Dereszynski et al. 2011), little work has been done regarding state evaluation in RTS games. (Yang, Harrison, and Roberts 2014) tries to predict game outcomes in Massively Online Battle Arena (MOBA) games, a genre different from but similar to RTS.…”
Section: Introduction
confidence: 99%
“…They found it difficult to determine the opponent's strategy in the early game. Dereszynski et al. (2011) successfully used a statistical model for predicting opponent behavior and strategy in StarCraft.…”
Section: Related Work
confidence: 99%