2021
DOI: 10.1016/j.jmp.2021.102602
Hierarchical Bayesian models of reinforcement learning: Introduction and comparison to alternative methods

Cited by 7 publications (5 citation statements)
References 46 publications
“…As a final technical comment, reinforcement learning model parameters were estimated using maximum likelihood techniques on individual subjects followed by model comparison. Future work could expand on this by using hierarchical Bayesian parameter estimation to reduce the variance around parameter estimates (Piray et al., 2019; van Geen and Gerraty, 2021; Lee and Newell, 2011). However, choosing prior distributions within the hierarchical Bayesian approach is not trivial and may not work for all of the models tested in this study.…”
Section: Discussion
confidence: 99%
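The variance reduction this quote attributes to hierarchical estimation can be illustrated with a minimal Gaussian partial-pooling sketch. Everything here is an illustrative assumption, not taken from the cited papers: the group distribution, the noise level, and the use of known hyperparameters (a real hierarchical model would estimate them jointly).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject maximum-likelihood learning-rate estimates:
# true values drawn from an assumed group distribution, observed with
# assumed estimation noise.
n_subjects = 200
group_mean, group_sd = 0.30, 0.05   # assumed group-level distribution
noise_sd = 0.10                      # assumed ML estimation error

true_alpha = rng.normal(group_mean, group_sd, n_subjects)
ml_alpha = true_alpha + rng.normal(0.0, noise_sd, n_subjects)

# Gaussian partial pooling: the posterior mean for each subject is a
# precision-weighted average of the group mean and that subject's ML
# estimate, shrinking noisy individual estimates toward the group.
prec_group, prec_obs = group_sd**-2, noise_sd**-2
shrunk_alpha = (prec_group * group_mean + prec_obs * ml_alpha) / (
    prec_group + prec_obs
)

err_ml = np.mean((ml_alpha - true_alpha) ** 2)
err_shrunk = np.mean((shrunk_alpha - true_alpha) ** 2)
print(err_ml, err_shrunk)
```

In this setup the shrunk estimates have markedly lower mean squared error than the raw per-subject estimates, which is the sense in which hierarchical estimation "reduces the variance around parameter estimates" in the quote above.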
“…The present solution of a more comprehensive yet parsimonious model avoids compromising the independence of separate data sets, making it preferable to alternative small-data solutions finding recourse in regularization via fully group-level estimation (i.e., concatenating data sets or averaging parameters) or the intermediate approaches of empirical priors and hierarchical Bayesian modeling across participants [13,29,79,302-305]. From an idealized Bayesian-statistical perspective, compromising independence between individuals in this way mitigates putative measurement error from limited data.…”
Section: The Primacy Of Bias and Hysteresis As Well As Individual Dif...
confidence: 99%
“…To analyze learning computationally, model fitting was carried out independently for each participant using Bayesian parameter optimization methods, implemented in Stan [38]. We fitted a classic Q-learning model to choice data, based on the same principles as Hertz et al. (2021), as follows:…”
Section: (E) Analyses
confidence: 99%
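The citing paper's Stan implementation is not reproduced in this excerpt, but the likelihood of a classic Q-learning model of the kind it describes can be sketched in Python. The function name, toy data, and parameter values below are illustrative assumptions; only the delta-rule update and softmax choice rule are the standard ingredients the quote refers to.

```python
import numpy as np

def q_learning_loglik(choices, rewards, alpha, beta, n_options=2):
    """Log-likelihood of a classic Q-learning model with softmax choice.

    choices: sequence of chosen option indices per trial.
    rewards: sequence of observed outcomes per trial.
    alpha (learning rate) and beta (inverse temperature) are the free
    parameters a fitting routine (e.g. in Stan) would estimate.
    """
    q = np.zeros(n_options)
    loglik = 0.0
    for c, r in zip(choices, rewards):
        # softmax choice probabilities from the current Q-values
        # (max subtracted for numerical stability)
        z = beta * q
        p = np.exp(z - np.max(z))
        p /= p.sum()
        loglik += np.log(p[c])
        # delta-rule update of the chosen option's value
        q[c] += alpha * (r - q[c])
    return loglik

# toy usage: a learner who is mostly rewarded for choosing option 0
choices = [0, 0, 1, 0, 0]
rewards = [1, 1, 0, 1, 1]
print(q_learning_loglik(choices, rewards, alpha=0.3, beta=2.0))
```

A maximum-likelihood fit would maximize this function over (alpha, beta) per participant; a Bayesian fit, as in the quote, would instead place priors on the parameters and sample the posterior.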