2018
DOI: 10.1109/tciaig.2017.2669895

Combat Models for RTS Games

Abstract: Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or "simulator") of the game at hand. However, in some games such a forward model is not readily available. This paper presents three forward models for two-player attrition games, which we call "combat models", and shows how they can be used to simulate combat in RTS games. We also show how these combat models can be learned from replay data. We use STARCRAFT as our application domain. We report experiments com…
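
For readers unfamiliar with the term, the sketch below illustrates what a combat "forward model" interface could look like: something a search algorithm such as MCTS can query to advance a combat state without running the real game. It is an illustrative sketch only, not the paper's actual models; all names (CombatForwardModel, ToyAttritionModel, the dpf parameter, etc.) and the toy attrition rule are assumptions made for this example.

```python
# Illustrative sketch only (not the paper's implementation): a minimal
# forward-model ("simulator") interface for two-player attrition combat,
# of the kind a game tree search algorithm such as MCTS could call
# instead of the real game engine. All names are illustrative assumptions.
import copy
import math
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class UnitGroup:
    """A homogeneous group of units owned by one player."""
    unit_type: str     # e.g. "marine"
    hit_points: int    # hit points of a single unit
    dpf: float         # damage per frame dealt by a single unit (toy parameter)
    hp_pool: float     # remaining total hit points of the whole group

    @property
    def count(self) -> int:
        return math.ceil(self.hp_pool / self.hit_points) if self.hp_pool > 0 else 0


def make_group(unit_type: str, count: int, hit_points: int, dpf: float) -> UnitGroup:
    return UnitGroup(unit_type, hit_points, dpf, count * hit_points)


@dataclass
class CombatState:
    armies: List[List[UnitGroup]]  # armies[0] and armies[1] are the two players
    frame: int = 0

    def alive(self, player: int) -> int:
        return sum(g.count for g in self.armies[player])


class CombatForwardModel:
    """The interface tree search needs: advance a state, or roll it to the end."""

    def step(self, state: CombatState, actions: Optional[list] = None) -> CombatState:
        raise NotImplementedError

    def simulate(self, state: CombatState, max_steps: int = 10_000) -> CombatState:
        """Roll the model forward until one army is destroyed or a step cap is hit."""
        while max_steps > 0 and state.alive(0) > 0 and state.alive(1) > 0:
            state = self.step(state)
            max_steps -= 1
        return state


class ToyAttritionModel(CombatForwardModel):
    """Deliberately simple attrition rule: each army's total damage per frame is
    subtracted from the opposing army's hit-point pools, front group first."""

    def step(self, state: CombatState, actions: Optional[list] = None) -> CombatState:
        nxt = copy.deepcopy(state)
        for attacker, defender in ((0, 1), (1, 0)):
            damage = sum(g.count * g.dpf for g in state.armies[attacker])
            for group in nxt.armies[defender]:
                absorbed = min(damage, group.hp_pool)
                group.hp_pool -= absorbed
                damage -= absorbed
                if damage <= 0:
                    break
        nxt.frame += 1
        return nxt


# Example: who survives a small marine vs. zealot skirmish under this toy rule?
start = CombatState([
    [make_group("marine", 10, 40, 0.5)],
    [make_group("zealot", 5, 160, 0.8)],
])
end = ToyAttritionModel().simulate(start)
print(end.frame, end.alive(0), end.alive(1))
```

Any concrete model, whether hand-designed or learned from replays, only needs to implement the step function; the search code above it stays unchanged.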

Cited by 18 publications (7 citation statements)
References 9 publications
“…Models with higher prediction accuracy were based on ensembles of decision trees, but made use of pre-filtered feature sets [9]. Similarly, in [10] rule-based methods are used to create a forward model of combat in StarCraft. These rule-based methods have the benefit of learning very compact representations, sometimes consisting of only a few hundred rules to encompass the dynamics of a single game.…”
Section: Literature Review
Citation type: mentioning; confidence: 99%
“…[14] try to anticipate the timing, army composition, and location of upcoming opponent attacks with a Bayesian model that explicitly deals with uncertainty due to the fog of war. [15] do not deal with issues caused by partial information, but demonstrate the use of a combat model (conditioned on both state and action) learned from replay data that can be used in Monte Carlo Tree Search.…”
Section: Related Work
Citation type: mentioning; confidence: 99%
“…These predictive models can be very useful for a StarCraft bot, but they do not directly determine what to produce during the game. Tactical decision making can benefit equally from combat forward models; Uriarte et al. showed how such a model can be fine-tuned using knowledge learned from replay data [28].…”
Section: B. Learning From StarCraft Replays
Citation type: mentioning; confidence: 99%