2017 · DOI: 10.1037/rev0000057
Model flexibility analysis does not measure the persuasiveness of a fit.

Abstract: Recently, Veksler, Myers, and Gluck (2015) proposed model flexibility analysis as a method that "aids model evaluation by providing a metric for gauging the persuasiveness of a given fit" (p. 755). Model flexibility analysis measures the complexity of a model in terms of the proportion of all possible data patterns it can predict. We show that this measure does not provide a reliable way to gauge complexity, which prevents model flexibility analysis from fulfilling either of the 2 aims outlined by Veksler et al…
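The abstract's definition of the flexibility measure lends itself to a small illustration. Below is a minimal sketch, assuming a hypothetical two-condition toy model and uniform parameter sampling (this is not Veksler et al.'s code): it Monte-Carlo-samples the parameter space and reports the proportion of possible ordinal data patterns the model can produce.

```python
# Sketch of the flexibility measure described in the abstract: the
# proportion of all possible (ordinal) data patterns a model can produce
# across its parameter space. The model and ranges are hypothetical.
import random

def model_prediction(a, b):
    """Toy two-condition model: returns a predicted mean per condition."""
    return (a, a + b)

def pattern(prediction):
    """Reduce a prediction to an ordinal pattern: which condition is larger."""
    x, y = prediction
    if x < y:
        return "x<y"
    if x > y:
        return "x>y"
    return "x=y"

# All possible ordinal patterns for a two-condition design.
all_patterns = {"x<y", "x>y", "x=y"}

# Sample the parameter space and record which patterns the model produces.
random.seed(1)
producible = set()
for _ in range(10_000):
    a = random.uniform(0, 1)
    b = random.uniform(-1, 1)  # the sign of b controls the predicted ordering
    producible.add(pattern(model_prediction(a, b)))

# The toy model produces 2 of the 3 patterns (an exact tie is never
# sampled), so the reported flexibility is about 0.67.
flexibility = len(producible) / len(all_patterns)
print(f"flexibility = {flexibility:.2f}")
```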

Cited by 20 publications (22 citation statements) · References 24 publications
“…Although the results have shown clear and convincing evidence that the MLBA provides a superior account of the compromise and similarity effects (for these data) compared to MDFT, the LCA, and the AAM, one could potentially argue that this superior fit is a result of its additional complexity. As noted in the model descriptions, the MLBA and the AAM contain nine parameters in comparison to the MDFT and LCA's eight parameters, which by classic standards suggests that the MLBA may be a more complex model (though this does not necessitate that it is more functionally complex; see Myung, 2000; Myung & Pitt, 1997; Evans, Howard, Heathcote, & Brown, 2017b). In order to address this potential issue, we assessed the predictive ability of the models to unseen data, using a version of the generalization criterion (Busemeyer & Wang, 2000).…”
Section: Generalization Criterion Analysis
confidence: 99%
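As a sketch of the generalization-criterion logic this quote describes — calibrate on one set of data, then score predictions for unseen data with no refitting — the following assumes a hypothetical Gaussian stand-in for the actual choice models; it is not the MLBA/MDFT code from the cited work.

```python
# Generalization-criterion style check (after Busemeyer & Wang, 2000):
# fit on a calibration condition, then score out-of-sample predictions on
# a transfer condition. Data and model here are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
calib = rng.normal(loc=1.0, scale=0.5, size=100)     # calibration condition
transfer = rng.normal(loc=1.3, scale=0.5, size=100)  # unseen transfer condition

def log_lik(data, mu, sigma):
    """Gaussian log-likelihood of data under fixed parameters."""
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2 * np.pi)))

# 1. Calibrate: estimate parameters from the calibration condition only
#    (closed-form Gaussian MLE stands in for full model fitting).
mu_hat, sigma_hat = calib.mean(), calib.std()

# 2. Generalize: score the *transfer* data under those fixed parameters,
#    with no refitting -- extra flexibility gains a model nothing here.
print(f"calibration log-lik: {log_lik(calib, mu_hat, sigma_hat):.1f}")
print(f"transfer log-lik:    {log_lik(transfer, mu_hat, sigma_hat):.1f}")
```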
“…Basic approaches to this issue, such as AIC and BIC, balance goodness-of-fit against model flexibility measured by the number of estimated parameters (e.g., Akaike, 1974; Schwarz, 1978; Burnham & Anderson, 2004). However, these basic approaches fail to take account of differences in complexity due to differences in functional form, which occur because the increase in flexibility endowed by adding a parameter depends on the mathematical form of a parametric model (Myung, 2000; Myung & Pitt, 1997; Evans & Brown, 2017a; Evans, Howard, Heathcote, & Brown, 2017). In the domain of forgetting curves, for example, Averell and Heathcote (2011) found that the addition of an asymptote parameter made exponential functions more flexible than power functions, whereas the opposite held with no asymptote.…”
Section: Hierarchical Bayesian Estimation and Model Flexibility
confidence: 99%
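The AIC and BIC penalties this quote refers to are simple closed forms, AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L. A minimal sketch follows; the log-likelihood values are hypothetical placeholders, not results from the cited papers.

```python
# AIC/BIC as mentioned in the quote above: both trade goodness-of-fit
# against a parameter-count penalty, which is exactly why they miss
# complexity differences due to functional form.
import math

def aic(log_lik: float, k: int) -> float:
    """Akaike (1974): fixed penalty of 2 per estimated parameter."""
    return 2 * k - 2 * log_lik

def bic(log_lik: float, k: int, n: int) -> float:
    """Schwarz (1978): per-parameter penalty grows with sample size n."""
    return k * math.log(n) - 2 * log_lik

# Two hypothetical models: one extra parameter buys a slightly better fit.
print(aic(log_lik=-520.0, k=8), aic(log_lik=-518.0, k=9))
# -> 1056.0 1054.0: AIC prefers the 9-parameter model here.
print(bic(log_lik=-520.0, k=8, n=500), bic(log_lik=-518.0, k=9, n=500))
# -> ~1089.7 ~1091.9: BIC's heavier penalty prefers the 8-parameter model.
```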
“…These assessments are usually made through quantitative model selection methods, which penalize models based on either their a-priori flexibility (e.g., Kass & Raftery, 1995; Myung et al., 2006; Annis et al., 2019; Gronau et al., 2017; Schwarz, 1978) or their over-fitting to the noise in samples of data (e.g., Spiegelhalter et al., 2002; Vehtari et al., 2017; Browne, 2000; Akaike, 1974). Importantly, models that are more flexible a-priori will have an unfair advantage in accurately explaining the data over simpler models (Roberts & Pashler, 2000; Myung & Pitt, 1997; Evans, Howard, et al., 2017), and models that over-fit to a sample of data will predict future data more poorly than those that only capture the robust trends (Myung, 2000). Although model comparison is less similar to confirmatory experimental research than model application, model comparison still typically involves confirmatory research questions about which models will be superior to others, making it well suited to preregistration.…”
Section: Introduction
confidence: 99%
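The over-fitting point in this last quote — that a more flexible model explains a sample better but predicts future data worse — can be shown with a standard toy demonstration. The sketch below assumes a hypothetical linear data-generating process and uses polynomial degree as a stand-in for flexibility; it is not from any of the cited works.

```python
# Over-fitting demo: a more flexible model (higher-degree polynomial)
# fits a noisy sample better but predicts a fresh sample from the same
# process worse. The data-generating process is a hypothetical choice.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 30)

def sample():
    """True process is linear; the noise is what flexibility latches onto."""
    return 2.0 * x + rng.normal(scale=0.5, size=x.size)

y_fit, y_new = sample(), sample()
for degree in (1, 9):
    coefs = np.polyfit(x, y_fit, degree)  # fit to the first sample only
    in_sample = np.mean((np.polyval(coefs, x) - y_fit) ** 2)
    out_sample = np.mean((np.polyval(coefs, x) - y_new) ** 2)
    print(f"degree {degree}: in-sample MSE {in_sample:.3f}, "
          f"new-data MSE {out_sample:.3f}")
# The degree-9 fit wins in-sample but typically loses on the new sample:
# it captured sample noise rather than the robust trend.
```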