2019
DOI: 10.1371/journal.pone.0206711

On the prevalence of uninformative parameters in statistical models applying model selection in applied ecology

Abstract: Research in applied ecology provides scientific evidence to guide conservation policy and management. Applied ecology is becoming increasingly quantitative and model selection via information criteria has become a common statistical modeling approach. Unfortunately, parameters that contain little to no useful information are commonly presented and interpreted as important in applied ecology. I review the concept of an uninformative parameter in model selection using information criteria and perform a literatur…


Cited by 118 publications (67 citation statements)
References 45 publications
Citation statements (ordered by relevance):
“…Some models showed strong signs of containing a 'pretending variable' (sensu Anderson 2007), otherwise known as an uninformative parameter (Leroux 2019). These variables can be identified when the addition of a variable to a simpler nested model does not improve model fit (i.e. the log-likelihood) and increases the AIC value by approximately the penalty of two (Anderson 2007, Leroux 2019).…”
Section: Discussion (mentioning, confidence: 99%)
“…These variables can be identified when the addition of a variable to a simpler nested model does not improve model fit (i.e. the log-likelihood) and increases the AIC value by approximately the penalty of two (Anderson 2007, Leroux 2019). In such cases, we excluded models containing a pretending variable, as recommended by Anderson (2007) and Leroux (2019).…”
Section: Discussion (mentioning, confidence: 99%)
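The "pretending variable" effect described in these statements is straightforward to reproduce. Below is a minimal sketch on simulated data (my own illustration, not code from the cited papers), assuming an ordinary least-squares model fitted with statsmodels: adding a pure-noise predictor leaves the log-likelihood essentially unchanged, so AIC rises by roughly the penalty of 2.

```python
# Illustrative sketch only (not from the cited papers): adding a pure-noise
# predictor to an adequate model barely changes the log-likelihood, so AIC
# increases by approximately 2 -- the signature of an uninformative
# ("pretending") parameter.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                          # informative predictor
noise = rng.normal(size=n)                      # uninformative predictor
y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=n)

base = sm.OLS(y, sm.add_constant(x)).fit()
full = sm.OLS(y, sm.add_constant(np.column_stack([x, noise]))).fit()

print(f"delta log-likelihood: {full.llf - base.llf:+.3f}")   # close to 0
print(f"delta AIC:            {full.aic - base.aic:+.3f}")   # close to +2
```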
“…We demonstrate empirically what is often taken for granted in applied ecological analyses, that selecting a model based on out‐of‐sample prediction is different than balancing fit and complexity using only within‐sample data and pre‐specified penalty terms. Information criteria attempt to balance fit and complexity, but often select over‐parameterized models (Arnold, Barker and Link, Leroux) and at best only approximate a model's predictive ability. Information criteria are often used for their computational simplicity, avoiding the need for a testing data set or the burden of cross‐validation, but potentially compromising the assessment of predictive ability by use of within‐sample approximation (Hooten and Hobbs).…”
Section: Discussion (mentioning, confidence: 99%)
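To make the contrast in that statement concrete, here is a generic sketch (again my own illustration, not the authors' analysis) comparing within-sample AIC with prediction error on a held-out split; the simulated variables and the single train/test split are assumptions for the example.

```python
# Generic illustration (not the authors' analysis): within-sample AIC versus
# out-of-sample error on a held-out split, for a small and a larger model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=n)

train, test = slice(0, 200), slice(200, n)
X_small = sm.add_constant(x)                                        # y ~ x
X_big = sm.add_constant(np.column_stack([x, rng.normal(size=n)]))   # y ~ x + noise

for name, X in [("small", X_small), ("big", X_big)]:
    fit = sm.OLS(y[train], X[train]).fit()
    mse = np.mean((y[test] - fit.predict(X[test])) ** 2)   # out-of-sample error
    print(f"{name:5s}  within-sample AIC={fit.aic:8.2f}  held-out MSE={mse:.3f}")
```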
“…We created all simpler combinations of the most complex model and selected the best‐performing models using AICc (Burnham & Anderson). We assessed whether high‐ranking models contained uninformative parameters, which are often present when comparing nested models, simply because the inclusion of an uninformative parameter receives a penalty of 2 AIC points (Anderson; Leroux). Uninformative parameters can be identified when their addition to a simpler nested model causes little improvement in the log‐likelihood and when confidence intervals for the parameter estimate span zero (Anderson; Leroux).…”
Section: Methods (mentioning, confidence: 99%)
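For reference, here is a minimal sketch of ranking candidate models by small-sample corrected AIC (AICc); the formula follows the standard Burnham & Anderson convention, and the helper names are my own, not code from the cited study.

```python
# Hedged sketch: rank candidate fits by small-sample corrected AIC,
# AICc = AIC + 2K(K+1)/(n - K - 1), with K the number of estimated parameters.
import numpy as np

def aicc(log_lik, k, n):
    """AICc for a model with log-likelihood log_lik, k parameters, n observations."""
    aic = -2.0 * log_lik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

def rank_by_aicc(candidates, n):
    """candidates: list of (name, log_lik, k). Returns (name, AICc, delta AICc), best first."""
    scored = [(name, aicc(ll, k, n)) for name, ll, k in candidates]
    best = min(v for _, v in scored)
    return sorted([(name, v, v - best) for name, v in scored], key=lambda t: t[1])
```

A candidate that sits roughly 2 AICc above a simpler nested model, with essentially no gain in log-likelihood, is the uninformative-parameter case the quoted statement describes.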
“…Uninformative parameters can be identified when their addition to a simpler nested model causes little improvement in the log‐likelihood and when confidence intervals for the parameter estimate span zero (Anderson; Leroux). In such cases, we omitted the model (Leroux). We predicted abundance and standard errors for each of the 28 study sites, either from the best model when there was a clear winning model, or a model‐averaged prediction when competing models were within 7 ΔAICc (Burnham et al.).…”
Section: Methods (mentioning, confidence: 99%)
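The two steps quoted here, flagging an uninformative parameter and averaging predictions over models within 7 ΔAICc, can be sketched generically as follows; the log-likelihood cutoff of 1.0 and the helper names are assumptions for illustration, not the authors' code.

```python
# Generic sketch of the two steps quoted above (not the authors' code):
# (1) flag a parameter as uninformative when it adds little to the
#     log-likelihood and its confidence interval spans zero;
# (2) average per-model predictions with Akaike weights, restricted to
#     models within 7 delta-AICc of the best model.
import numpy as np

def looks_uninformative(delta_log_lik, ci_lower, ci_upper, ll_cutoff=1.0):
    """Small log-likelihood gain and a confidence interval spanning zero."""
    return delta_log_lik < ll_cutoff and ci_lower < 0.0 < ci_upper

def model_averaged_prediction(predictions, aicc_values, delta_cutoff=7.0):
    """Akaike-weight average of per-model predictions (rows = models)."""
    aicc_values = np.asarray(aicc_values, dtype=float)
    predictions = np.asarray(predictions, dtype=float)
    delta = aicc_values - aicc_values.min()
    keep = delta <= delta_cutoff              # retain models within the cutoff
    w = np.exp(-0.5 * delta[keep])
    w /= w.sum()                              # renormalise over retained models
    return np.average(predictions[keep], axis=0, weights=w)
```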