2016
DOI: 10.1111/2041-210x.12648
A graphical framework for model selection criteria and significance tests: refutation, confirmation and ecology

Abstract: In this study, we use a novel graphical heuristic to compare how four methods evaluate the merit of competing hypotheses (e.g. H0 and HA): significance testing, two popular information-theoretic approaches (AIC and BIC), and Good's Bayes/non-Bayes compromise, an underutilized hypothesis-testing approach whose demarcation criterion adjusts for n. A primary goal of our work is to clarify the concept of strong consistency in model selection. Explicit considerations of this principle (includin…

Cited by 22 publications (20 citation statements); references 72 publications (107 reference statements).
“…While AICc is recommended as a standard approach (Burnham and Anderson), BIC has more recently been advocated when comparing models to test hypotheses (i.e., linked to our treatments) and when models are based on controlled experiments (Aho et al.), as in the current study. Differences in AICc or BIC between two models larger than 2 can be seen as greater support for the model with the lower value.…”
Section: Methods (mentioning)
confidence: 94%
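The ΔAICc/ΔBIC rule of thumb in the excerpt above is straightforward to apply in base R. The following is a minimal sketch under stated assumptions: the models m0 and m1, the data frame dat, and its columns y and treatment are hypothetical stand-ins, and AICc is computed by hand from AIC() and logLik() because base R does not provide it directly.

```r
# Minimal sketch: compare a null and a treatment model by AICc and BIC.
# 'dat', 'y' and 'treatment' are hypothetical; only base R (stats) is used.
m0 <- lm(y ~ 1, data = dat)           # null model, H0
m1 <- lm(y ~ treatment, data = dat)   # alternative model, HA

n <- nrow(dat)
aicc <- function(fit) {
  k <- attr(logLik(fit), "df")                # number of estimated parameters
  AIC(fit) + 2 * k * (k + 1) / (n - k - 1)    # small-sample correction to AIC
}

# A difference larger than 2 is read as support for the model with the lower value.
delta_aicc <- aicc(m0) - aicc(m1)
delta_bic  <- BIC(m0) - BIC(m1)
```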
“…Additionally, BIC is a stringent method for carrying out confirmatory hypothesis testing and is therefore appropriate for testing the various comparisons listed in Table (Aho et al.). The best-supported model was identified as either (1) the top-ranked model with a BIC at least 2 units below that of the second-ranked model (M2) (Burnham & Anderson), or (2) the most parsimonious of the models within 2 BIC units of the top-ranked model (Burnham & Anderson), as inclusion of additional variables or sub-group levels did not result in a better-performing model.…”
Section: Methods (mentioning)
confidence: 97%
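A sketch of the two-step decision rule described in this excerpt, assuming cand is a hypothetical named list of models fitted to the same data: rank the candidates by BIC, keep the top model if it leads the runner-up by more than 2 units, and otherwise take the most parsimonious model within 2 units.

```r
# Sketch of the selection rule; 'cand' (named list of fitted models) is hypothetical.
bic_tab <- data.frame(
  model = names(cand),
  k     = sapply(cand, function(f) attr(logLik(f), "df")),  # parameter count
  BIC   = sapply(cand, BIC)
)
bic_tab <- bic_tab[order(bic_tab$BIC), ]
bic_tab$dBIC <- bic_tab$BIC - bic_tab$BIC[1]    # difference from the top-ranked model

if (bic_tab$dBIC[2] > 2) {
  best <- bic_tab$model[1]                      # (1) top model clearly ahead
} else {
  within2 <- bic_tab[bic_tab$dBIC <= 2, ]
  best <- within2$model[which.min(within2$k)]   # (2) most parsimonious within 2 units
}
```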
“…We used the MuMIn package in R (Barton) to select the top model from among all possible model combinations, with summer and winter ranges evaluated independently (Barton). All models within 0–2 AICc units of the top model were evaluated for the presence of uninformative parameters (Arnold); a parameter was considered uninformative if it improved model performance by only a negligible amount (ΔAIC < 2; Arnold; Aho, Derryberry, & Peterson). We assessed model performance and predictive skill using cross-validation (see below).…”
Section: Methods (mentioning)
confidence: 99%
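As a rough illustration of the workflow in this excerpt, the sketch below assumes a hypothetical previously fitted global model global_mod and the MuMIn package: dredge() generates the all-subsets candidate set ranked by AICc, and the models within 2 units of the best are then screened for uninformative parameters in the sense of Arnold.

```r
# Sketch only: 'global_mod' is a hypothetical global model fitted beforehand.
library(MuMIn)
options(na.action = "na.fail")              # dredge() requires this before refitting

ms  <- dredge(global_mod, rank = "AICc")    # all subsets of the global model, ranked
top <- subset(ms, delta <= 2)               # models within 2 AICc units of the best

# Parameters that appear only in models gaining <= 2 AICc units over a simpler
# nested model would then be treated as uninformative before interpretation.
top
```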