2018
DOI: 10.1101/449751
Preprint

Sensitivity and Specificity of Information Criteria

Abstract:
Keywords: Latent class analysis • Likelihood ratio testing • Model selection

Key Points:
• Information criteria such as AIC and BIC are motivated by different theoretical frameworks.
• However, when comparing pairs of nested models, they reduce algebraically to likelihood ratio tests with differing alpha levels.
• This perspective makes it easier to understand their different emphases on sensitivity versus specificity, and why BIC, but not AIC, possesses model selection consistency.
• This perspective is useful for comparison…
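The second key point can be illustrated numerically. For two nested models differing by k parameters, preferring the larger model by AIC is equivalent to a likelihood ratio test with critical value 2k, while BIC's critical value is k·log(n); the implied alpha level then follows from the chi-square(k) distribution. A minimal sketch for k = 1 (function name is my own, not from the paper):

```python
import math

def lrt_alpha_df1(threshold):
    """Implied alpha of a likelihood ratio test with 1 df and the given
    critical value: P(chi-square_1 > threshold) = erfc(sqrt(threshold / 2))."""
    return math.erfc(math.sqrt(threshold / 2.0))

# AIC penalizes each extra parameter by 2, so with one extra parameter the
# larger model is chosen whenever the LRT statistic exceeds 2:
alpha_aic = lrt_alpha_df1(2.0)  # about 0.157 -- far looser than 0.05

# BIC's penalty is log(n) per parameter, so its implied alpha shrinks as the
# sample size n grows -- the root of BIC's model selection consistency:
for n in (100, 1000, 10000):
    print(n, round(lrt_alpha_df1(math.log(n)), 4))
```

This makes the sensitivity/specificity contrast concrete: AIC's fixed, liberal alpha favors power (fewer false negatives), while BIC's vanishing alpha favors specificity (fewer false positives) at large n.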


Cited by 143 publications
(148 citation statements)
References 127 publications
“…the average marginal effect), those effects were negligible in comparison with completely spurious effects arising from random allocation of species to islands (Tables S4.6–S4.8 in Appendix S4). As discussed in a recent study by Dziak, Coffman, Lanza, Li, and Jermiin () there is also a tendency of AIC to prioritize the avoidance of “false negatives” with respect to variable selection, and we deem it equally plausible that the apparent contradiction can be attributed to AIC model “overfitting”.…”
Section: Discussion
confidence: 75%
“…All models were reduced using a bidirectional stepwise selection of confounders determined by the Akaike information criterion in an attempt to identify a model best balancing accuracy and simplicity, while maintaining a low false negative rate. The vital sign deviation under analysis was forced to remain in the list of final confounders.…”
Section: Methods
confidence: 99%
“…Attention to type I error is purportedly emphasized over attention to type II error in classic hypothesis testing – although BIC generally has far lower rates of type I error than FSTs – because the former constitutes an incorrect statement, while the latter is merely a ‘failure to reject’ (Dziak et al.). De-emphasis of type II error, however, may result in underfitting and loss of predictive power.…”
Section: Discussion
confidence: 97%
“…The ordering of demarcation lines is in agreement with Dziak et al. (), who previously defined AIC and BIC as methods that emphasize sensitivity (statistical power) and specificity (avoidance of type I error) in null hypothesis tests, respectively.…”
Section: A Graphical Heuristic
confidence: 99%