2014
DOI: 10.1101/sqb.2014.79.024851
Causal Model Comparison Shows That Human Representation Learning Is Not Bayesian

Abstract: How do we learn what features of our multidimensional environment are relevant in a given task? To study the computational process underlying this type of "representation learning," we propose a novel method of causal model comparison. Participants played a probabilistic learning task that required them to identify one relevant feature among several irrelevant ones. To compare two models of this learning process, we ran each model alongside the participant during task performance, making predictions re…
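The abstract cuts off before the method details, but the general procedure it describes (running each candidate model alongside the participant, trial by trial, and scoring its predictions of the participant's choices) can be illustrated with a minimal Python sketch. Everything below is an illustrative assumption rather than the paper's actual implementation: the two candidate model classes, the softmax choice rule, and all parameter values are hypothetical.

import numpy as np

def softmax(values, beta=3.0):
    """Map option values to choice probabilities (inverse temperature beta)."""
    v = beta * np.asarray(values, dtype=float)
    v -= v.max()  # numerical stability
    p = np.exp(v)
    return p / p.sum()

class FeatureRLModel:
    """Candidate 1 (hypothetical): incremental RL over feature weights."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, features):
        # features: (n_options, n_features) binary matrix for this trial
        return softmax(features @ self.w)

    def update(self, features, choice, reward):
        chosen = features[choice]
        rpe = reward - chosen @ self.w      # reward prediction error
        self.w += self.lr * rpe * chosen    # credit every feature of the choice

class BayesianFeatureModel:
    """Candidate 2 (hypothetical): Bayesian posterior over the relevant feature."""
    def __init__(self, n_features, p_reward=0.75):
        self.posterior = np.full(n_features, 1.0 / n_features)
        self.p = p_reward  # assumed P(reward | chose the relevant feature)

    def predict(self, features):
        # Value of each option = posterior mass on the features it contains.
        return softmax(features @ self.posterior)

    def update(self, features, choice, reward):
        contains = features[choice].astype(bool)
        hit, miss = self.p, 1.0 - self.p
        lik = np.where(contains,
                       hit if reward else miss,
                       miss if reward else hit)
        self.posterior *= lik
        self.posterior /= self.posterior.sum()

def compare_models(models, trials):
    """Run each model alongside the participant and accumulate the
    log-likelihood of the participant's actual choices under each model."""
    loglik = {name: 0.0 for name in models}
    for features, choice, reward in trials:
        for name, model in models.items():
            p = model.predict(features)
            loglik[name] += np.log(p[choice] + 1e-12)
            # Both models observe the participant's choice and its outcome.
            model.update(features, choice, reward)
    return loglik  # higher log-likelihood = better account of behavior

In this sketch, the model with the higher accumulated log-likelihood better accounts for the participant's trial-by-trial behavior; the paper's causal model comparison presumably differs in its details, which the truncated abstract does not specify.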

Cited by 9 publications (8 citation statements)
References 33 publications
“…In this study of reinforcement-based timing, a simple selective value maintenance strategy reduced information load and facilitated the transition to exploitation; further work is needed to evaluate alternative information maintenance strategies and their putative neural implementations. At the same time, our findings broadly align with emerging evidence that the implementation of optimal inference is limited by representational capacity (2) and that heuristic approaches to reinforcement learning are more likely to be effective in complex environments (20,57).…”
Section: Discussion (supporting, confidence: 85%)
“…As we illustrate below, such selective maintenance trades off a high-fidelity representation of all available rewards for the opportunity to exploit the best response time (cf. 20).…”
Section: -Anatole France (mentioning, confidence: 99%)
“…Evaluating whether animals follow optimal foraging principles faces several challenges. This is because, in experimental settings that approximate natural foraging, many variables such as energy, time, and opportunity costs are difficult to measure, and competing models can generate qualitatively similar predictions [10][11][12]. These problems can be mitigated in reward foraging tasks that require subjects to initiate decisions from identical starting points to equally distant options.…”
Section: Introduction (mentioning, confidence: 99%)
“…We have previously suggested that this process of “representation learning” depends on the interaction between RL and selective attention. Notably, attention filters what we learn about, and this attention filter is itself dynamically modulated by reinforcement (Geana & Niv, 2015; Niv et al, 2015; Wilson & Niv, 2012). Here we ask whether older adults differ from younger adults in their selective attention strategies and/or in the efficacy of their ability to learn from feedback in a multidimensional environment in which only some dimensions are relevant for the task at hand.…”
(mentioning, confidence: 99%)