2019 · Preprint
DOI: 10.48550/arxiv.1907.03025
Improving Lasso for model selection and prediction

Abstract: It is known that the Thresholded Lasso (TL), SCAD or MCP correct intrinsic estimation bias of the Lasso. In this paper we propose an alternative method of improving the Lasso for predictive models with general convex loss functions which encompass normal linear models, logistic regression, quantile regression or support vector machines. For a given penalty we order the absolute values of the Lasso non-zero coefficients and then select the final model from a small nested family by the Generalized Information Criterion …
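The abstract describes a two-step procedure: screen with the Lasso, order the surviving coefficients by absolute value to form a small nested family of candidate models, and pick the final model with the Generalized Information Criterion (GIC). The snippet below is a minimal sketch of that idea for a normal linear model only; the squared-error loss, the OLS refit of each candidate support, the fixed Lasso penalty, and the BIC-like GIC penalty of log(n) per parameter are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal sketch: Lasso screening -> nested family ordered by |coefficient|
# -> GIC selection.  Assumes squared-error loss and a BIC-like GIC penalty.
import numpy as np
from sklearn.linear_model import Lasso


def lasso_gic_selection(X, y, lasso_alpha=0.1):
    n, _ = X.shape

    # Step 1: screening -- fit the Lasso for a given penalty.
    lasso = Lasso(alpha=lasso_alpha).fit(X, y)
    nonzero = np.flatnonzero(lasso.coef_)

    # Step 2: order the non-zero coefficients by absolute value (descending);
    # the top-k variables for k = 1, 2, ... define a small nested family.
    order = nonzero[np.argsort(-np.abs(lasso.coef_[nonzero]))]

    best_gic, best_support = np.inf, np.array([], dtype=int)
    for k in range(1, len(order) + 1):
        support = order[:k]
        # Refit by least squares on the k variables with the largest |coef|.
        beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        rss = np.sum((y - X[:, support] @ beta) ** 2)
        # Step 3: Generalized Information Criterion (BIC-like penalty here).
        gic = n * np.log(rss / n) + np.log(n) * k
        if gic < best_gic:
            best_gic, best_support = gic, support
    return best_support, best_gic
```

Because the nested family has at most as many members as the Lasso support, the GIC search only adds a handful of low-dimensional refits on top of a single Lasso fit.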

Cited by 2 publications (4 citation statements) · References 29 publications
“…It is well known that LASSO can consistently estimate 𝛽 under much weaker assumptions than the irrepresentability condition (see, e.g., Meinshausen & Yu, 2009, or Van de Geer and Bühlmann, 2009). This suggests that an appropriately thresholded version of LASSO can recover S(𝛽) under weaker assumptions than the irrepresentability condition (Pokarowski et al, 2019).…”
Section: Sign Recovery by Thresholded Lasso
confidence: 99%
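As a toy illustration of the thresholding idea quoted above, the sketch below zeroes out small Lasso coefficients and reads off the estimated sign vector of 𝛽; the threshold value delta and the use of scikit-learn's Lasso are assumptions made purely for illustration, not the cited papers' tuning rules.

```python
# Toy thresholded-Lasso sign estimate: keep only coefficients whose absolute
# value exceeds delta, then take signs as the estimated signed support S(beta).
import numpy as np
from sklearn.linear_model import Lasso


def thresholded_lasso_signs(X, y, alpha=0.1, delta=0.05):
    coef = Lasso(alpha=alpha).fit(X, y).coef_.copy()
    coef[np.abs(coef) < delta] = 0.0   # thresholding step
    return np.sign(coef)               # estimated sign vector of beta
```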
“…To ease the readability of the numerical results, we also provide in Appendix C.2 additional comparisons to other estimators: the thresholded Robust Lasso proposed in Nguyen and Tran (2013a) and the thresholded lasso in Pokarowski et al (2019). Rlass0 still remains competitive in terms of sign recovery, in particular in difficult cases, that is, when the percentage of missing values increases, when the missing data are informative and when the covariates are correlated.…”
Section: Estimators Considered
confidence: 99%
“…For a non-oracle hyperparameter tuning, we compare the Robust Lasso-Zero with the thresholded Robust Lasso proposed in Nguyen and Tran (2013a) and the thresholded lasso in Pokarowski et al (2019).…”
Section: C.2 Comparison of the Robust Lasso-Zero with Other Estimators
confidence: 99%