Playing First-Person-Shooter Games with A3C-Anticipator Network Based Agents Using Reinforcement Learning

Year: 2019
DOI: 10.1007/978-3-030-24268-8_43

Cited by 5 publications (5 citation statements)
References 11 publications
“…The most common method of quantitative comparison is to use the obtained reward to compare two or more trained agents; we find this used in [10][11][12][13][14][15][16]. It is indeed a reliable metric within a local environment; however, it leads to results that cannot be compared to anything outside of that paper, as the reward will differ from project to project.…”
Section: Metrics (mentioning)
confidence: 99%
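
A concrete illustration of this reward-based comparison, as a minimal sketch: evaluate two trained agents by their mean episode return in the same environment. A Gymnasium-style API is assumed, and the names (evaluate, policy_a, policy_b) are illustrative rather than taken from the cited works.

# Minimal sketch: comparing two trained agents by obtained reward.
# Assumes a Gymnasium-style environment API; `policy` maps an
# observation to an action. All names here are illustrative.
import statistics

def evaluate(env, policy, episodes=30):
    """Return mean and stdev of total episode reward for `policy`."""
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            total += reward
            done = terminated or truncated
        returns.append(total)
    return statistics.mean(returns), statistics.stdev(returns)

# mean_a, _ = evaluate(env, policy_a)
# mean_b, _ = evaluate(env, policy_b)  # higher mean return "wins" locally

As the quoted passage notes, such numbers are only meaningful within one environment and one reward definition, so they cannot be compared across papers.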
“…The A3C algorithm was tested against an A3C-Anticipator network in [13], but it was concluded that there was no significant difference between the two.…”
Section: Asynchronous Advantage Actor-Critic (mentioning)
confidence: 99%
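
To make the "no significant difference" conclusion concrete, one common check is a significance test on the two agents' episode returns. The sketch below uses Welch's t-test via SciPy, which is an assumed choice, not necessarily the procedure used in [13]; the return samples are placeholders.

# Sketch: testing whether two agents' returns differ significantly.
# Welch's t-test is an assumed choice, not necessarily the procedure
# used in [13]; the return samples below are illustrative placeholders.
from scipy import stats

returns_a3c = [10.2, 11.5, 9.8, 12.1, 10.7]           # placeholder data
returns_anticipator = [10.9, 11.1, 10.3, 11.8, 10.5]  # placeholder data

t_stat, p_value = stats.ttest_ind(returns_a3c, returns_anticipator,
                                  equal_var=False)  # Welch's t-test
print("significant" if p_value < 0.05 else "not significant")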
“…Gamification. A significant feature of RL agents lies in the simulation of the task, which often revolves around learning a game (Mnih et al., 2013; Sun et al., 2019). These agents require a reward function in order to train their algorithms correctly, which means that any task an individual creates must center around a gamified component.…”
Section: Figure 3 Hat Experiments Platform Framework (mentioning)
confidence: 99%
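
A minimal sketch of such a "gamified component": wrapping an arbitrary task so that every step emits a reward an RL algorithm can train on. The toy task and its scoring below are illustrative assumptions, not taken from the cited works.

# Sketch: gamifying a task by attaching a reward function to it.
# The task (guess a hidden number) and its scoring are illustrative;
# the point is that RL training requires a per-step reward signal.
import random

class GuessingGame:
    def reset(self):
        self.target = random.randint(0, 100)
        return 0  # initial observation: the previous guess

    def step(self, guess):
        # Reward function: closer guesses score higher, which turns an
        # arbitrary task into something an RL agent can optimize.
        reward = -abs(guess - self.target)
        done = guess == self.target
        return guess, reward, done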
“…The results emphasized the promising capabilities of DQN and DRQN in game AI, showcasing the potential for these algorithms to outperform human players when making decisions based solely on raw screen pixels. The work proposed in [26] addresses the limitations of common built-in game agents, which often rely on pre-written scripts and potentially unfair information. Instead, the paper focuses on utilizing deep learning and reinforcement learning methods to create game agents that make decisions more flexibly, akin to human players who rely solely on the game screen.…”
Section: Introduction (mentioning)
confidence: 99%
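
A minimal sketch of the pixels-to-actions idea described here: a convolutional Q-network mapping a raw screen frame to one Q-value per action. PyTorch and the classic 84x84 Atari-style layer sizes are assumptions on my part; the architectures used in [26] and the cited DQN/DRQN work may differ.

# Sketch: a DQN-style Q-network that sees only raw screen pixels.
# PyTorch and the classic 84x84 Atari-style layer sizes are assumptions;
# the cited works may use different architectures.
import torch
import torch.nn as nn

class PixelQNetwork(nn.Module):
    def __init__(self, n_actions, in_channels=4):  # 4 stacked grayscale frames
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 7x7 comes from 84x84 input
            nn.Linear(512, n_actions),              # one Q-value per action
        )

    def forward(self, pixels):  # (batch, 4, 84, 84) float tensor in [0, 1]
        return self.net(pixels)

# Greedy decision from the screen alone, as a human player would see it:
frame = torch.rand(1, 4, 84, 84)
action = PixelQNetwork(n_actions=8)(frame).argmax(dim=1)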