2018
DOI: 10.3758/s13423-018-1446-5

The relative merit of empirical priors in non-identifiable and sloppy models: Applications to models of learning and decision-making

Abstract: Formal modeling approaches to cognition provide a principled characterization of observed responses in terms of a set of postulated processes, specifically in terms of parameters that modulate the latter. These model-based characterizations are useful to the extent that there is a clear, one-to-one mapping between parameters and model expectations (identifiability) and that parameters can be recovered from reasonably sized data using a typical experimental design (recoverability). These properties are sometime…

Cited by 39 publications (30 citation statements)
References 67 publications
“…Although we focused on test-retest reliability, there are many other properties worth exploring including parameter identifiability (e.g., Spektor & Kellen, 2018), parameter recovery (e.g., Ahn et al., 2011; Haines et al., 2018; Miletić, Turner et al., 2017), tests of selective influence (a form of construct validity where experimental manipulations cause expected changes in parameter values; Criss, 2010), and parameter convergence between behavioral models and models derived at other levels of analysis (e.g., with trait or neural models; Haines et al., 2020; Turner et al., 2017). Bayesian analysis facilitates joint estimation of all model parameters and their hypothesized relations, thus allowing for proper calibration of uncertainty in key parameters (e.g., test-retest reliability).…”
Section: Further Improvements
confidence: 99%
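The recovery checks cited in the excerpt above follow a common recipe: draw known parameter values, simulate data of a typical experiment size, re-estimate the parameters, and compare true against recovered values. A minimal sketch, assuming a hypothetical one-parameter binomial model (not the learning models studied in the paper itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recovery study: 50 simulated subjects, 200 trials each.
n_subjects, n_trials = 50, 200
true_p = rng.uniform(0.2, 0.8, size=n_subjects)  # true data-generating values
data = rng.binomial(n_trials, true_p)            # simulated response counts
recovered_p = data / n_trials                    # maximum-likelihood estimates

# A high true/recovered correlation indicates good recoverability
# at this design size.
r = np.corrcoef(true_p, recovered_p)[0, 1]
```

In practice one would also inspect the scatter of true versus recovered values for systematic bias, not just the correlation.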
“…To address question 2, we calculated the 95% HDIs of the posterior group-level parameter distributions. If the parameters are identifiable, the 95% HDI should include the true data-generating parameter value (see also Spektor and Kellen, 2018). We observed good identifiability of the group-level parameters for both models (Figs.…
Section: B2 Parameter and Model Recovery Analyses
confidence: 51%
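The HDI-based identifiability check described in this excerpt can be sketched as follows. The `hdi` helper and the normal draws standing in for MCMC samples are illustrative assumptions, not the authors' code:

```python
import numpy as np

def hdi(samples, cred=0.95):
    """Narrowest interval containing `cred` of the posterior samples."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.floor(cred * n))
    # Among all intervals spanning k samples, pick the narrowest one.
    widths = s[k:] - s[: n - k]
    i = int(np.argmin(widths))
    return s[i], s[i + k]

# Identifiability check: does the 95% HDI of the posterior cover the
# true data-generating value?
rng = np.random.default_rng(0)
true_mu = 0.5
posterior = rng.normal(true_mu, 0.1, size=4000)  # stand-in for MCMC draws
lo, hi = hdi(posterior)
covered = lo <= true_mu <= hi
```

Libraries such as ArviZ provide an equivalent `hdi` routine for real posterior draws; the hand-rolled version here just makes the definition explicit.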