Jansen (2014), Biometrika. DOI: 10.1093/biomet/ast055

Information criteria for variable selection under sparsity

Abstract: The optimization of an information criterion in a variable selection procedure leads to an additional bias, which can be substantial in sparse, high-dimensional data. The bias can be compensated by applying shrinkage while estimating within the selected models. This paper presents modified information criteria for use in variable selection and estimation without shrinkage. The analysis motivating the modified criteria follows two routes. The first, explored for signal-plus-noise observations only, goes by comp…
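The bias the abstract describes — minimizing an information criterion over many candidate models makes the minimized value an optimistic estimate of the true risk — can be seen in a short simulation. The sketch below (a signal-plus-noise model with unit noise variance, keep-the-largest selection, and Mallows' Cp with known variance) is a hypothetical illustration of the phenomenon, not the paper's actual procedure; the sample sizes and signal strength are assumptions chosen for the demo.

```python
import numpy as np

# Signal-plus-noise observations: a sparse mean vector plus unit-variance noise.
rng = np.random.default_rng(0)
n, k_true = 1000, 10
mu = np.zeros(n)
mu[:k_true] = 5.0                        # sparse true signal (assumption)
y = mu + rng.standard_normal(n)          # observations, sigma = 1

# For each model size k, keep the k largest |y_i|, zero out the rest, and
# score the fit with Mallows' Cp for known sigma^2 = 1: RSS + 2k - n.
order = np.argsort(-np.abs(y))
best_k, best_cp, err_at_best = 0, np.inf, np.inf
for k in range(n + 1):
    est = np.zeros(n)
    est[order[:k]] = y[order[:k]]
    cp = np.sum((y - est) ** 2) + 2 * k - n
    if cp < best_cp:
        best_k, best_cp = k, cp
        err_at_best = np.sum((est - mu) ** 2)  # realised squared error

# Because the same data both select the model and evaluate the criterion,
# minimising Cp keeps far more than k_true coordinates, and the minimised
# Cp sits well below the realised error of the selected model.
print(best_k, round(best_cp, 1), round(err_at_best, 1))
```

This optimism is exactly the additional selection bias that the modified criteria in the paper are designed to account for when no shrinkage is applied within the selected model.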

Cited by 14 publications (25 citation statements). References 30 publications.
“…Zou et al (2007), Zhang et al (2010), Jansen (2014) and Janson et al (2015) discussed model selection criteria and the degrees of freedom of the estimated model under L1-type regularization. Breheny and Huang (2009) derived AIC (Akaike, 1974) and BIC (Schwarz, 1978) type criteria by defining the degrees of freedom of the model estimated via the group bridge, but such criteria have not been well developed for other bi-level selection penalties.…”
Section: Discussion (mentioning)
confidence: 99%
“…Finding the best model for estimation without shrinkage is more delicate, as the prediction error for a misspecified model size l increases rapidly (Jansen, 2014). This sensitivity thus determines the size of the selected model.…”
Section: The Sparse Variable Selection Problem (mentioning)
confidence: 99%
“…For forward selection and least angle regression in normal linear regression models, Taylor et al (2016) studied selective hypothesis tests and confidence intervals. Jansen (2014) studied the effect of the optimization on the expected values of the Akaike information criterion and Mallows' Cp in high-dimensional sparse models. Belloni et al (2015) obtained uniformly valid confidence intervals in the presence of a sparse high-dimensional nuisance parameter.…”
Section: Introduction (mentioning)
confidence: 99%