DOI: 10.1007/978-3-540-73055-2_52

Strategies for Affect-Controlled Action-Selection in Soar-RL

Cited by 12 publications (16 citation statements)
References 5 publications
“…Reinforcement learning is triggered when knowledge is inadequate to make a decision. SOAR was extended with emotions that affect learning [14].…”
Section: Biologically Inspired Architectures
Confidence: 99%
“…Second, Soar allows many forms of planning, enabling a better comparison between affective control of planning versus forward internal simulation. We are currently investigating affect-based control techniques in Soar-RL (Hogewoning, Broekens, Eggermont, & Bovenkamp, 2007).…”
Section: Future Work
Confidence: 99%
“…Connections of emotions with reinforcement learning have been explored by a number of authors (Moerland et al., 2017; Hogewoning et al., 2007; Marinier III and Laird, 2008; El-Nasr et al., 2000; Broekens et al., 2015). In traditional reinforcement learning (RL), an agent is tasked both with learning about its world (primarily the utility of situations) and with acting in that same world.…”
Section: Artificial Intelligence
Confidence: 99%
“…Exploration, on the other hand, is seen as something that could be guided by any number of (possibly affective) elements. For example, in (Hogewoning et al., 2007), higher valence (equivalent to reward being higher than expected, i.e., things are going well) is used to push an agent toward increased exploitation of current knowledge, whereas lower valence/reward (things are going worse than expected) pushes the agent to explore. This view is largely consistent with the affect-as-cognitive-feedback view (Huntsinger et al., 2014), in which positive valence facilitates use of existing mental constructs, whereas negative valence inhibits it (and thus forces an agent to seek new solutions through exploration).…”
Section: Artificial Intelligence
Confidence: 99%
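
The exploration/exploitation coupling described in the last excerpt can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example rather than the Soar-RL mechanism from the cited paper: "valence" is approximated by a decayed average of the TD error, and it modulates a Boltzmann temperature so that positive valence (things going better than expected) yields more exploitation and negative valence yields more exploration. All class names, parameters, and values are illustrative assumptions.

```python
import numpy as np

class ValenceModulatedAgent:
    """Tabular Q-learner whose exploration is modulated by a valence signal.

    This is an illustrative sketch of affect-controlled action selection,
    not the implementation described in Hogewoning et al. (2007).
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95,
                 base_temp=1.0, temp_span=0.9, valence_decay=0.9):
        self.q = np.zeros((n_states, n_actions))
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor
        self.base_temp = base_temp      # temperature at neutral valence
        self.temp_span = temp_span      # how strongly valence shifts temperature
        self.valence_decay = valence_decay
        self.valence = 0.0              # running "better/worse than expected" signal

    def temperature(self):
        # Positive valence -> lower temperature -> exploitation;
        # negative valence -> higher temperature -> exploration.
        v = np.clip(self.valence, -1.0, 1.0)
        return max(self.base_temp - self.temp_span * v, 1e-3)

    def select_action(self, state):
        # Boltzmann (softmax) action selection at the current temperature.
        prefs = self.q[state] / self.temperature()
        probs = np.exp(prefs - prefs.max())
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update; the TD error doubles as the
        # "reward relative to expectation" signal that drives valence.
        td_error = (reward + self.gamma * self.q[next_state].max()
                    - self.q[state, action])
        self.q[state, action] += self.alpha * td_error
        self.valence = (self.valence_decay * self.valence
                        + (1 - self.valence_decay) * np.tanh(td_error))
```

Under these assumptions, a run of positive TD errors raises valence and sharpens the softmax toward greedy behavior, while a run of negative TD errors flattens it and encourages trying alternative actions, mirroring the valence-to-exploitation mapping described in the excerpt.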