Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games
DOI: 10.1109/itw.2010.5593329
Modular Reinforcement Learning architectures for artificially intelligent agents in complex game environments

Cited by 6 publications (7 citation statements)
References 7 publications
“…that it doesn't adequately balance a higher ES against a longer ETB; simulations show that frequently switching the high-level strategy is problematic (cf. also [8]). Once these enhancements are complete, we will release the various agents and game simulation environment in an open source repository.…”
Section: Discussion
confidence: 87%
“…As we mentioned earlier, the MCTS approach used in [12] and [16] and Pfeiffer's RL approach [10], [11] all show that training must build upon what is already a complex prior strategy incorporating sophisticated, domain-specific reasoning. Hanna et al. [8] obtain the same result for the Fungus Eater game: the models that learn best and quickest use the most hand-coded knowledge and the most refined architecture for subdividing the task in a domain-specific way. The Fungus Eater game has a large state space but relatively few actions compared to Settlers, and it is a single-agent environment, so opponents' preferences don't count.…”
Section: Introduction
confidence: 80%
“…The latter scheme can be considered a set of formal equations according to (3) and (2). Each color corresponds to one determination triplet, where:…”
Section: Formalization of PL
confidence: 99%
“…AG therefore requires a particular component, called a control agent, that continuously feeds on player actions and game state and tweaks a set of game parameters so that the player's experience metrics remain optimal. This can be considered a Reinforcement Learning (RL) situation [1,2], so classical machine learning (ML) approaches can be used. With the development of neural networks, in particular Recurrent Neural Networks (RNNs), which are well suited to sequential processing [3,4], new opportunities are offered for developing a game controller using machine learning techniques.…”
Section: Introduction
confidence: 99%
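The control-agent idea in the excerpt above can be sketched in a few lines. This is not the controller from the cited work: it is a minimal illustration that treats parameter tweaking as a simple epsilon-greedy bandit rather than full RL, and the difficulty settings, engagement metric, and player model (`simulated_engagement`) are all hypothetical stand-ins.

```python
import random

def control_agent(measure_engagement, difficulties, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: repeatedly pick a difficulty setting,
    observe the player's engagement metric, and keep a running mean
    reward per setting so play converges on the best one."""
    rng = random.Random(seed)
    counts = {d: 0 for d in difficulties}
    means = {d: 0.0 for d in difficulties}
    for _ in range(steps):
        if rng.random() < eps:
            d = rng.choice(difficulties)                    # explore
        else:
            d = max(difficulties, key=lambda k: means[k])   # exploit
        r = measure_engagement(d, rng)
        counts[d] += 1
        means[d] += (r - means[d]) / counts[d]              # incremental mean
    return max(difficulties, key=lambda k: means[k])

def simulated_engagement(difficulty, rng):
    # Hypothetical player model: engagement peaks at medium difficulty.
    peak = {1: 0.3, 2: 0.8, 3: 0.5}[difficulty]
    return peak + rng.gauss(0, 0.05)

best = control_agent(simulated_engagement, [1, 2, 3])
```

Under this toy player model the agent settles on the medium setting; a full RL formulation would additionally condition the choice on game state, which is where the RNN-based controllers mentioned above come in.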