2015
DOI: 10.3758/s13428-015-0672-2

Hierarchical Bayesian estimation and hypothesis testing for delay discounting tasks

Abstract: A state-of-the-art data analysis procedure is presented to conduct hierarchical Bayesian inference and hypothesis testing on delay discounting data. The delay discounting task is a key experimental paradigm used across a wide range of disciplines, from economics and cognitive science to neuroscience, all of which seek to understand how humans or animals trade off the immediacy versus the magnitude of a reward. Bayesian estimation allows rich inferences to be drawn, along with measures of confidence, based upon l…

Cited by 53 publications (63 citation statements)
References 42 publications
“…The approach of relying on previous parameter inferences to determine priors for related models is becoming more frequent in cognitive modeling. Some recent examples include Gu et al (2016) in psychophysics, Gershman (2016) in reinforcement learning, Vincent (2016) in the context of temporal discounting, Wiehler et al (2015) for different clinical sub-populations in the context of gambling, and Donkin et al (2015) in the context of a visual working memory model. In an interesting application of the latter model, Kary et al (2015) used vague priors for key parameters, and used the data from the first half of their participants to derive the posterior distributions.…”
Section: Previous Data and Modeling (mentioning)
confidence: 99%
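The workflow described in this excerpt, deriving a posterior from one dataset and reusing a summary of it as the prior for a related model, can be sketched minimally. This is a hypothetical illustration of the general idea, not code from any of the cited papers: a grid approximation with a normal likelihood stands in for whatever model and inference engine is actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Parameter grid (e.g. a log discount rate); densities are treated as a
# discrete pmf over this grid for simplicity.
grid = np.linspace(-5, 5, 2001)

def grid_posterior(prior_pmf, data, scale=1.0):
    """Bayes update on the grid: prior times likelihood, renormalized."""
    loglik = stats.norm.logpdf(
        np.asarray(data)[:, None], loc=grid, scale=scale
    ).sum(axis=0)
    post = prior_pmf * np.exp(loglik - loglik.max())
    return post / post.sum()

def summarize(pmf):
    """Posterior mean and standard deviation on the grid."""
    mu = float((grid * pmf).sum())
    sd = float(np.sqrt(((grid - mu) ** 2 * pmf).sum()))
    return mu, sd

# Step 1: vague prior + first dataset -> posterior.
vague = stats.norm.pdf(grid, 0.0, 10.0)
vague /= vague.sum()
data1 = rng.normal(1.0, 1.0, size=50)
post1 = grid_posterior(vague, data1)
mu1, sd1 = summarize(post1)

# Step 2: summarize that posterior as a normal and reuse it as an
# informed prior for a second (smaller) dataset.
informed = stats.norm.pdf(grid, mu1, sd1)
informed /= informed.sum()
data2 = rng.normal(1.0, 1.0, size=10)
post2 = grid_posterior(informed, data2)
mu2, sd2 = summarize(post2)
```

With the informed prior carrying the first dataset's information, the second-stage posterior is tighter than the first even though it sees far fewer observations.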
“…The discount factors were derived from the fully endogenized rational inattention model (R2) after fitting the free parameters to data from [14]. We also compared against standard quasi-hyperbolic discounting (QH; [21]), and several variations of hyperbolic discounting, including the basic functional form (H0), and generalized versions that incorporate magnitude-dependent discounting and choice stochasticity (H1-H3; [22]). The fully endogenized rational model decisively won the model comparison, with a protected exceedance probability (PXP) greater than 0.99.…”
Section: Applications To Prior Experimental Results (mentioning)
confidence: 99%
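For context, the basic hyperbolic (H0) and quasi-hyperbolic (QH) families mentioned in the excerpt are standardly written as D(t) = 1/(1 + kt) and D(t) = βδ^t with D(0) = 1. A minimal sketch of these discount functions (parameter names and values are illustrative, not the cited papers' notation):

```python
import numpy as np

def hyperbolic(delay, k):
    """H0: discount factor 1 / (1 + k * delay)."""
    return 1.0 / (1.0 + k * np.asarray(delay, dtype=float))

def quasi_hyperbolic(delay, beta, delta):
    """QH (beta-delta): 1 at delay 0, beta * delta**delay afterwards,
    giving an immediate one-step drop followed by exponential decay."""
    d = np.asarray(delay, dtype=float)
    return np.where(d == 0.0, 1.0, beta * delta ** d)

delays = np.array([0.0, 1.0, 30.0, 365.0])  # e.g. days
h = hyperbolic(delays, k=0.01)
q = quasi_hyperbolic(delays, beta=0.8, delta=0.999)
```

Both curves start at 1 and decrease with delay; QH's characteristic present bias is the discontinuous drop from 1 to roughly β at the first delayed time point.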
“…where α is the inverse temperature and ω is a lapse probability, capturing occasional random responses (see also [22]). All subsequent models share the same choice probability function.…”
Section: Model-fitting and Comparison (mentioning)
confidence: 99%
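One common way to write a choice rule with an inverse temperature α and a lapse probability ω, as described in this excerpt, is a logistic function of the value difference mixed with random responding. This is a hedged sketch of that general parameterization; the cited model's exact functional form may differ.

```python
import numpy as np

def p_choose_delayed(v_delayed, v_immediate, alpha, omega):
    """Logistic choice in the value difference, with a lapse term:
    with probability omega the response is random (p = 0.5),
    otherwise it follows the logistic (softmax) rule with
    inverse temperature alpha."""
    logistic = 1.0 / (1.0 + np.exp(-alpha * (v_delayed - v_immediate)))
    return omega * 0.5 + (1.0 - omega) * logistic

# Indifference: equal subjective values give p = 0.5 for any alpha, omega.
p_eq = p_choose_delayed(10.0, 10.0, alpha=1.0, omega=0.1)

# The lapse bounds probabilities away from 0 and 1, so an occasional
# "wrong" response does not get zero likelihood.
p_hi = p_choose_delayed(100.0, 0.0, alpha=1.0, omega=0.1)
```

The lapse mixture is what keeps the likelihood of accidental key presses strictly positive, which stabilizes parameter estimation.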
“…are for a single class of models, such as sequential sampling models (Matzke et al., 2013; Singmann et al., 2016; Vincent, 2015; Wabersich & Vandekerckhove, 2014; Wiecki, Sofer, & Frank, 2013). An exception is the Variational Bayesian Analysis (VBA) MATLAB toolbox (Daunizeau, Adam, & Rigoux, 2014), which allows users to fit and compare various models with variational Bayesian algorithms.…”
Section: Model Inversion (mentioning)
confidence: 99%