State-of-the-Art and Open Challenges in RTS Game-AI and Starcraft
Year: 2017
DOI: 10.14569/ijacsa.2017.081203

Cited by 7 publications (3 citation statements); citing publications span 2019–2024.
References 19 publications.
“…As computers advanced in the early 1940s, programmers created novel virtual worlds and unexpected human-computer interaction methods. Later on, thanks to technological developments such as GPUs [1] and TPUs [2] and innovations in neural networks [3], it became possible to use artificial intelligence extensively in games [4]. Because of this, reinforcement learning algorithms and strategies have been used to train agents in Atari games.…”
Section: Introduction (mentioning)
confidence: 99%
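For context on the excerpt above, the sketch below shows the generic agent-environment loop on which Atari reinforcement-learning agents are built, with a random placeholder policy standing in for a trained one. It assumes the gymnasium package with the Atari extras is installed; the game id ALE/Breakout-v5, the step budget, and the random policy are illustrative choices, not details from the cited paper.

```python
# Minimal sketch (not the cited paper's method): the agent-environment loop that
# algorithms such as DQN build on for Atari games. Assumes gymnasium with the
# Atari extras installed (pip install "gymnasium[atari]"); the game id, step
# budget, and random policy are illustrative placeholders.
import gymnasium as gym

env = gym.make("ALE/Breakout-v5")
obs, info = env.reset(seed=0)

episode_return = 0.0
for _ in range(1_000):                     # real training runs use millions of steps
    action = env.action_space.sample()     # placeholder; a DQN would pick argmax_a Q(obs, a)
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    if terminated or truncated:            # start a new episode when the game ends
        obs, info = env.reset()
        episode_return = 0.0

env.close()
```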
“…However, in more complex tasks such as real-time strategy (RTS) games, RL suffers from long training times, requiring hundreds of millions of training steps [24], which can take weeks to complete depending on the hardware. Due to the extremely large state and action spaces and the sparse-reward nature of such tasks, vanilla RL agents with no further optimisations are incapable of mastering RTS games within a reasonable time [2]. As a result, shortening the training time has the potential to significantly benefit RL agents and their outcomes in complex environments.…”
Section: Introduction (mentioning)
confidence: 99%
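As a rough check on the training-time claim in the excerpt above, the back-of-envelope sketch below converts a step count in the cited "hundreds of millions" range into wall-clock time; the steps-per-second throughput is an assumed, hardware-dependent placeholder rather than a figure from either paper.

```python
# Back-of-envelope sketch of the training-time claim. The step count sits inside
# the excerpt's "hundreds of millions" range; the throughput is an assumed,
# hardware-dependent placeholder.
training_steps = 300_000_000      # illustrative value within "hundreds of millions"
steps_per_second = 500            # assumed end-to-end simulation + learning throughput

days = training_steps / steps_per_second / 86_400
print(f"~{days:.1f} days at {steps_per_second} steps/s")
# ~6.9 days; halving the assumed throughput already pushes the run toward two weeks.
```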
“…However, in more complex tasks such as real-time strategy (RTS) games, RL suffers from long training times, requiring hundreds of millions of training steps [20], which can take weeks to complete depending on the hardware. Due to the extremely large environments and sparse-reward nature of such tasks, RL agents with no further optimisations are incapable of mastering RTS games within a reasonable time [2]. As a result, shortening the training time has the potential to significantly benefit RL agents and their outcomes in complex environments.…”
Section: Introduction (mentioning)
confidence: 99%
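To make the sparse-reward point in these excerpts concrete, here is a small, self-contained toy sketch; the chain task, its length, and all hyperparameters are invented for illustration and do not come from the cited papers. A tabular Q-learning agent receives a reward only when it reaches the end of a long chain, so nearly every update carries zero signal and the estimated value of the start state typically stays near zero.

```python
# Toy sparse-reward task (illustrative only): reward arrives only when the agent
# reaches the end of a long chain, mimicking a win/loss signal at the end of an
# RTS match. Vanilla tabular Q-learning gets almost no learning signal.
import random

CHAIN_LENGTH = 50                          # stand-in for a long match

def run_episode(q, epsilon=0.1, alpha=0.1, gamma=0.99):
    state = 0
    while state < CHAIN_LENGTH:
        # epsilon-greedy over two actions: 0 = quit (episode ends), 1 = advance
        if random.random() < epsilon:
            action = random.randint(0, 1)
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])
        if action == 0:
            reward, next_state, done = 0.0, state, True
        else:
            next_state = state + 1
            done = next_state == CHAIN_LENGTH
            reward = 1.0 if done else 0.0  # the only non-zero reward in the task
        target = reward if done else reward + gamma * max(q[(next_state, a)] for a in (0, 1))
        q[(state, action)] += alpha * (target - q[(state, action)])
        if done:
            break
        state = next_state

q = {(s, a): 0.0 for s in range(CHAIN_LENGTH + 1) for a in (0, 1)}
for _ in range(2_000):
    run_episode(q)

# Typically still ~0.0 after 2,000 episodes: the reward is too sparse for the
# untuned agent to propagate value back to the start of the chain.
print("Q(start, advance) =", round(q[(0, 1)], 4))
```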