2019
DOI: 10.1214/18-aos1741

Perturbation bootstrap in adaptive Lasso

Abstract: The adaptive Lasso (Alasso) was proposed by Zou [J. Amer. Statist. Assoc. 101 (2006) 1418-1429] as a modification of the Lasso for simultaneous variable selection and estimation of the parameters in a linear regression model. Zou (2006) established that, in certain fixed-dimensional settings, the Alasso estimator is variable-selection consistent as well as asymptotically Normal in the indices corresponding to the nonzero regression coefficients. In an influential paper, Minnier, Tian and Cai [J. Am…

Cited by 18 publications (17 citation statements)
References 30 publications
“…Remark. Even though the adaptive elastic net estimators of the nonzero coefficients are asymptotically Normal, some empirical work suggests that convergence is quite slow and that Wald-type confidence intervals for the coefficients in β_{0,S_0} will have poor coverage. For example, it was demonstrated in Das et al. that the values of λ which ensure good variable selection (larger values of λ) tend to result in subnominal coverage of Wald-type intervals; smaller values of λ, under which variable selection performance is poor, result in closer-to-nominal coverage of Wald-type intervals. We emphasize that it is the variable selection result, statement (1) of the theorem, which is of primary interest in regularized regression modeling, while the asymptotic Normality result, statement (2) of the theorem, comes as a theoretical byproduct, and we do not recommend using it to conduct Wald-type inference.…”
Section: Technical Results
confidence: 99%
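As an illustration of the Wald-type construction the remark above warns against, here is a minimal sketch. An assumption of this illustration: the variance estimate comes from OLS refitted on a selected support, which is one common way to build such intervals, not necessarily the exact construction evaluated in the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 150, 4
beta = np.array([1.5, 0.0, -1.0, 0.0])
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

# Suppose a penalized fit selected support S (taken as the true support
# here purely for illustration); refit OLS on S and form Wald intervals.
S = np.array([0, 2])
XS = X[:, S]
bhat, *_ = np.linalg.lstsq(XS, y, rcond=None)
resid = y - XS @ bhat
sigma2 = resid @ resid / (n - len(S))        # noise variance estimate
cov = sigma2 * np.linalg.inv(XS.T @ XS)      # plug-in asymptotic covariance
se = np.sqrt(np.diag(cov))
lo, hi = bhat - 1.96 * se, bhat + 1.96 * se  # nominal 95% Wald intervals
```

The remark's point is that intervals of this form inherit the slow convergence of the asymptotic Normality result, so their actual coverage can fall well below the nominal level when λ is tuned for variable selection.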
“…Zou introduced the adaptive lasso and proved an asymptotic Normality result for the estimators of the nonzero regression coefficients to which statement (2) of the theorem is analogous; in spite of this result, which would seem to suggest the good performance of Wald-type intervals, the construction of post-regularization confidence intervals has remained an area of much on-going research. For linear regression, van de Geer et al. and Zhang and Zhang introduced the desparsified lasso and Das et al. introduced a perturbation bootstrap for the adaptive lasso; investigating adaptations of these methods to group testing could be an area of future research.…”
Section: Technical Results
confidence: 99%
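The perturbation bootstrap mentioned above can be sketched in outline: refit the adaptive lasso under i.i.d. random multipliers on the observations and read off percentile intervals. The following is a hedged toy implementation, not the exact algorithm of Das et al.; the multiplier distribution, the tuning value, and the weight rule 1/|β̂_OLS| are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 5
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 1.0])
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.standard_normal(n)

def adaptive_lasso(X, y, lam, sample_weight=None):
    """Adaptive lasso via column rescaling: penalty lam * sum_j w_j |b_j|,
    with data-driven weights w_j = 1/|b_ols_j| (an illustrative choice)."""
    ols = LinearRegression().fit(X, y, sample_weight=sample_weight)
    w = 1.0 / np.maximum(np.abs(ols.coef_), 1e-8)  # adaptive weights
    Xs = X / w                                     # absorb weights into design
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(
        Xs, y, sample_weight=sample_weight)
    return fit.coef_ / w                           # undo the rescaling

lam = 0.05
beta_hat = adaptive_lasso(X, y, lam)

# Perturbation bootstrap: refit with i.i.d. mean-1, variance-1 multipliers
# attached to the observations (exponential multipliers used here).
B = 200
boot = np.empty((B, p))
for b in range(B):
    G = rng.exponential(scale=1.0, size=n)         # E[G] = 1, Var(G) = 1
    boot[b] = adaptive_lasso(X, y, lam, sample_weight=G)

# Percentile confidence intervals for each coefficient
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

The appeal of the approach, as the excerpts note, is that the bootstrap distribution can track the sampling distribution of the Alasso estimator more faithfully than the Normal limit on which Wald-type intervals rely.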
“…The perturbation-based bootstrap (Das et al. 2019) procedure for penalized regression methods would be interesting to explore in such situations. For distribution-free predictive inference, split conformal prediction bands (Lei et al.…
Section: Discussion
confidence: 99%
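Split conformal prediction, mentioned in the excerpt above, admits a short self-contained sketch: fit on one half of the data, calibrate a residual quantile on the other half, and widen every prediction by that quantile. The linear working model and the 90% level here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 400
X = rng.uniform(-2, 2, size=(n, 1))
y = X[:, 0] ** 2 + rng.standard_normal(n) * 0.3

# Split conformal: fit on one half, calibrate the residual quantile on the other.
idx = rng.permutation(n)
train, cal = idx[: n // 2], idx[n // 2:]
model = LinearRegression().fit(X[train], y[train])
scores = np.abs(y[cal] - model.predict(X[cal]))   # calibration residuals
alpha = 0.1
k = int(np.ceil((len(cal) + 1) * (1 - alpha)))    # conformal quantile index
q = np.sort(scores)[k - 1]

x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
band = (pred - q, pred + q)                        # 90% prediction band
```

The resulting band has finite-sample marginal coverage without distributional assumptions, which is what makes it attractive as a complement to model-based intervals in penalized regression.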
“…Work on the perturbation bootstrap in the linear regression setup is limited. Some work has been carried out by Chatterjee and by Das et al. (2017). As a variable selection procedure, Tibshirani (1996) introduced the Lasso.…”
Section: Introduction
confidence: 99%