2018
DOI: 10.1101/461129
Preprint

From predictive models to cognitive models: Separable behavioral processes underlying reward learning in the rat

Abstract: Cognitive models are a fundamental tool in computational neuroscience, embodying in software precise hypotheses about the algorithms by which the brain gives rise to behavior. The development of such models is often largely a hypothesis-first process, drawing on inspiration from the literature and the creativity of the individual researcher to construct a model, and afterwards testing the model against experimental data. Here, we adopt a complementary data-first approach, in which richly characterizing and sum…


Cited by 17 publications (40 citation statements)
References 59 publications
“…Of interest is the fact that this combination of value learning and action selection function, though widely used, has been repeatedly shown to mis-estimate the shape of the choice probability curve in humans (Shteingart et al, 2013) and animals (Miller, Botvinick, et al, 2019) in the sorts of repeated decision-making tasks to which this family of models is best suited, in particular at the extremes. We return to this point below.…”
Section: The View From Reinforcement Learning
confidence: 99%
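The "combination of value learning and action selection function" referenced in the statement above is conventionally a delta-rule value update paired with a softmax choice rule. A minimal sketch of that standard pairing (the parameter values and function names here are illustrative assumptions, not taken from the cited papers):

```python
import math

def update_value(q, reward, alpha=0.1):
    """Delta-rule value update: move the action value q toward the
    observed reward by a fraction alpha (the learning rate)."""
    return q + alpha * (reward - q)

def softmax_choice_prob(q_left, q_right, beta=3.0):
    """Probability of choosing 'left' under a softmax over the two
    action values; beta is the inverse temperature."""
    z_left = math.exp(beta * q_left)
    z_right = math.exp(beta * q_right)
    return z_left / (z_left + z_right)
```

The mis-estimation noted in the quote concerns the shape this softmax imposes on the choice-probability curve, which empirically deviates from observed behavior most strongly when one value clearly dominates the other.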
“…Lastly, as discussed above, both models require "stickiness" in action choice. This stickiness has been reported in analyses of behavior across tasks and species, and is also called perseveration, choice history bias, and the law of exercise (Thorndike 1911; Ito and Doya 2009; Balcarras et al 2016; Miller, Botvinick, and Brody 2018; Urai et al 2019; Lak et al 2020; Gershman 2020; Lai and Gershman 2021). We find that this bias to repeat previous actions offers a parsimonious mechanism for adapting an existing action policy to novel environmental conditions.…”
Section: Stickiness Captures the Deviation Of Mouse Behavior From Optimality
confidence: 77%
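The "stickiness" (perseveration) bias in the statement above is commonly modeled as an extra term inside the softmax that boosts the previously chosen action, independent of its learned value. A hedged sketch of that common formulation (parameter values are illustrative assumptions):

```python
import math

def sticky_choice_prob(q_left, q_right, prev_choice, beta=3.0, kappa=1.0):
    """Softmax choice with a perseveration bonus: kappa is added to the
    logit of whichever action was chosen on the previous trial
    (prev_choice is 'left' or 'right')."""
    bonus_left = kappa if prev_choice == "left" else 0.0
    bonus_right = kappa if prev_choice == "right" else 0.0
    z_left = math.exp(beta * q_left + bonus_left)
    z_right = math.exp(beta * q_right + bonus_right)
    return z_left / (z_left + z_right)
```

With kappa > 0 the model repeats its last action more often than the values alone warrant, reproducing the choice-history bias the quoted studies report; kappa = 0 recovers the plain softmax.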
“…We next considered logistic regression, which has been used previously to describe rodent behavior in similar tasks (Tai et al 2012; Parker et al 2016; Donahue, Liu, and Kreitzer 2018; Miller, Botvinick, and Brody 2018), as an alternative model. Although this simpler model was shown to perform well at predicting the right and left choices of animals in these tasks, its ability to predict switches has not been evaluated.…”
Section: Logistic Regression With a Stochastic Policy Better Predicts Mouse Behavior
confidence: 99%
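The logistic-regression approach described in the statement above predicts the next choice from weighted sums of past choices and past outcomes. A minimal sketch of the model's predictive form (the regressor coding and weights below are illustrative assumptions; in practice the weights are fit to behavioral data):

```python
import math

def predict_left_prob(past_choices, past_outcomes,
                      choice_weights, outcome_weights, bias=0.0):
    """Logistic model of choice. past_choices codes +1 (left) / -1 (right);
    past_outcomes codes +1 (rewarded left) / -1 (rewarded right) / 0
    (unrewarded). Lists are ordered most-recent-trial first, and each
    history regressor gets its own weight."""
    logit = bias
    logit += sum(w * c for w, c in zip(choice_weights, past_choices))
    logit += sum(w * o for w, o in zip(outcome_weights, past_outcomes))
    return 1.0 / (1.0 + math.exp(-logit))
```

Positive choice weights directly implement the perseveration bias discussed elsewhere on this page, which is one reason such regression models can approximate a wide range of trial-by-trial learning strategies.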
“…Previous studies have shown that a large subset of possible learning strategies can be approximated by logistic regression models that directly represent the influence of reward and choice history on future choices (Katahira 2015; Miller et al 2019). For this reason, in this study we explore the space of possible choice and outcome history effects instead of examining the correction's effect in the presence of individual learning algorithms.…”
Section: Correction In Misaction
confidence: 99%