2019
DOI: 10.1101/779306
Preprint

Computational neural mechanisms of goal-directed planning and problem solving

Abstract: The question of how animals and humans can solve arbitrary problems and achieve arbitrary goals remains open. Model-based and model-free reinforcement learning methods have addressed these problems, but they generally lack the ability to flexibly reassign reward value to various states as the reward structure of the environment changes. Research on cognitive control has generally focused on inhibition, rule-guided behavior, and performance monitoring, with relatively less focus on goal representations. From th…
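The revaluation problem the abstract points to can be made concrete with a toy example. The sketch below is not from the paper; it is a minimal Python illustration of how cached model-free Q-values keep steering toward an old goal after the reward moves, while a model-based planner that searches a known transition model adapts immediately. The ring world, parameters, and function names are all hypothetical.

```python
import random

# Toy deterministic environment: four states on a ring; actions move
# one step clockwise (+1) or counterclockwise (-1).
N_STATES = 4
ACTIONS = (+1, -1)

def step(state, action):
    return (state + action) % N_STATES

def q_learning(goal, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Model-free: cache action values for one fixed goal state."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        for _ in range(10):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: q[(s, a)])
            s2 = step(s, a)
            r = 1.0 if s2 == goal else 0.0
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
            if s == goal:
                break
    return q

def plan(start, goal):
    """Model-based: breadth-first search over the known transition model."""
    frontier, seen = [(start, [])], {start}
    while frontier:
        s, path = frontier.pop(0)
        if s == goal:
            return path
        for a in ACTIONS:
            s2 = step(s, a)
            if s2 not in seen:
                seen.add(s2)
                frontier.append((s2, path + [a]))

q = q_learning(goal=0)
# The reward now moves to state 2. The cached values still point at state 0 ...
stale_policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
print("stale model-free policy:", stale_policy)
# ... while replanning over the model reaches the new goal immediately.
print("model-based plan 1 -> 2:", plan(1, 2))
```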


Cited by 1 publication (3 citation statements). References 71 publications (61 reference statements).
“…After training, the model can generate novel action sequences to achieve arbitrary goal states. Adapted from (26).…”
Section: Figure · Citation type: mentioning · Confidence: 99%
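The capability described in this quote, reaching goal states the model was never explicitly trained on, can be illustrated with a small goal-conditioned planning sketch. This is an assumed mechanism for illustration, not the GOLSA algorithm itself: value is propagated backward from whichever state is currently the goal, over a learned transition model, and a greedy rollout then yields a fresh action sequence with no retraining.

```python
# A minimal sketch (assumed mechanics, not the GOLSA model) of how a
# learned transition model supports novel action sequences to an
# arbitrary goal state.

# Learned deterministic transitions for a toy 4-state ring:
# (state, action) -> next state.
MODEL = {(s, a): (s + a) % 4 for s in range(4) for a in (+1, -1)}

def values_for_goal(goal, gamma=0.9, sweeps=10):
    """Goal-conditioned value iteration: propagate value backward
    from whichever state is the current goal."""
    v = [0.0] * 4
    v[goal] = 1.0
    for _ in range(sweeps):
        for s in range(4):
            if s != goal:
                v[s] = gamma * max(v[MODEL[(s, a)]] for a in (+1, -1))
    return v

def act_sequence(start, goal):
    """Greedy rollout toward the goal; a new goal needs no retraining."""
    v, s, seq = values_for_goal(goal), start, []
    while s != goal:
        a = max((+1, -1), key=lambda a: v[MODEL[(s, a)]])
        seq.append(a)
        s = MODEL[(s, a)]
    return seq

print(act_sequence(0, 2))  # e.g. [1, 1] -- a sequence never explicitly trained
```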
“…Subjects were placed in one of four starting states and had to traverse one or two states to achieve a goal, by retrieving a key and subsequently using it to unlock a treasure chest for a reward (Figure 2B) (26, 29). Both the GOLSA model and the human fMRI subjects performed a simple treasure hunt task, in which subjects were placed in one of four possible starting locations, then asked to generate actions to reach any of the other possible locations.…”
Section: Figure 2B · Citation type: mentioning · Confidence: 99%
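For reference, the quoted task structure can be encoded as a tiny state machine. Everything concrete below (the line layout, the key and chest placements, the move set) is an assumption inferred from the quote, not taken from the paper; the point is only that reward depends jointly on location and key possession.

```python
# Hypothetical encoding of the treasure-hunt task: four locations on a
# line, the subject starts in one of them and must reach the key before
# the treasure chest pays out. Placements below are assumed.

LOCATIONS = range(4)
KEY_LOC, CHEST_LOC = 1, 3        # assumed placements
MOVES = (-1, +1)                 # step between adjacent locations

def step(state, move):
    """State is (location, has_key). Walking onto the key collects it;
    the chest yields reward only once the key is held."""
    loc, has_key = state
    new_loc = min(max(loc + move, 0), 3)
    has_key = has_key or new_loc == KEY_LOC
    reward = 1.0 if (new_loc == CHEST_LOC and has_key) else 0.0
    return (new_loc, has_key), reward

# Example episode: start at location 2, detour to the key, then unlock.
state, total = (2, False), 0.0
for move in (-1, +1, +1):        # 2 -> 1 (key) -> 2 -> 3 (chest)
    state, r = step(state, move)
    total += r
print(state, total)              # ((3, True), 1.0)
```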