2017
DOI: 10.3150/15-bej756

On the prediction performance of the Lasso

Abstract: Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood. In this paper, we give new insights into this relationship in the context of multiple linear regression. We show, in particular, that the incorporation of a simple correlation measure into the tuning parameter can lead to a nearly optimal prediction performance of the Lasso even for highly correlated covariates. However, we also reveal that for mod…

Cited by 118 publications (170 citation statements)
References 38 publications
“…Recent work of Hebiri and Lederer (2013), Dalalyan et al (2014), and others has examined in some detail how correlations among the columns x ·j of the design matrix X affect the performance of the LASSO and the selection of the penalty parameter λ. The broad conclusion of this work is that smaller values of λ are appropriate when the columns of X are more (positively) correlated, a relationship that holds for the permutation selection procedure.…”
Section: Some Connections With Universal Thresholds
confidence: 99%
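The relationship quoted above (smaller λ is appropriate for more positively correlated designs) can be illustrated with a small simulation. This is a minimal sketch, not code from any of the cited papers; the equicorrelated design, the λ grid, and the use of scikit-learn's `Lasso` are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, rho = 200, 50, 0.9

# Equicorrelated design: every pair of columns has correlation rho
# (a shared factor plus independent noise).
shared = rng.standard_normal((n, 1))
X = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal((n, p))

beta = np.zeros(p)
beta[:5] = 1.0  # sparse truth: 5 active covariates
y = X @ beta + rng.standard_normal(n)

# In-sample prediction error ||X(beta_hat - beta)||^2 / n over a lambda grid.
errors = {}
for lam in [0.001, 0.01, 0.1, 1.0]:
    fit = Lasso(alpha=lam, max_iter=50_000).fit(X, y)
    errors[lam] = np.mean((X @ (fit.coef_ - beta)) ** 2)
```

On such a highly correlated design, the small penalties in the grid yield a far lower prediction error than the large one, consistent with the quoted conclusion that heavy shrinkage is too aggressive when columns are strongly positively correlated.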
“…In the context of regression, this is known as the lasso (least absolute shrinkage and selection operator) method (Dalalyan et al, 2017), whereas the extension to multivariate settings is called the graphical lasso (glasso) (Friedman, Hastie, & Tibshirani, 2008). Importantly, the glasso method was primarily developed to overcome challenges in high-dimensional settings, in which the number of variables (p) often exceeds the number of observations (n) (Fan et al, 2016).…”
Section: Introduction
confidence: 99%
“…The default approach for estimating network models in psychology uses ℓ1 regularization (e.g., a form of penalized maximum likelihood) (Epskamp & Fried, 2016), which can simultaneously improve predictive accuracy and perform variable selection by reducing some parameters to exactly zero (Dalalyan, Hebiri, & Lederer, 2017). In the context of regression, this is known as the lasso (least absolute shrinkage and selection operator) method (Dalalyan et al, 2017), whereas the extension to multivariate settings is called the graphical lasso (glasso) (Friedman, Hastie, & Tibshirani, 2008).…”
Section: Introduction
confidence: 99%
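The glasso idea described in the excerpt above can be sketched in a few lines: the ℓ1 penalty on the precision (inverse covariance) matrix drives many off-diagonal entries, i.e., partial correlations, to exactly zero. The data, penalty value, and use of scikit-learn's `GraphicalLasso` below are illustrative assumptions, not the cited authors' setup:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.standard_normal((n, p))  # independent covariates, so the true network is empty

# Fit a sparse precision matrix; with a sizable penalty, the l1 term
# zeroes out the (spurious) off-diagonal partial correlations.
model = GraphicalLasso(alpha=0.5).fit(X)
precision = model.precision_

off_diag = precision[~np.eye(p, dtype=bool)]
n_zero = int(np.sum(np.isclose(off_diag, 0.0)))
```

Each zero off-diagonal entry corresponds to a missing edge in the estimated network, which is what makes the method usable when p approaches or exceeds n.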
“…Ortelli and van de Geer (2019a)) to prove oracle inequalities. However, these studies were confined to restrictive graph structures: the path in Dalalyan, Hebiri and Lederer (2017) and a class of tree graphs in Ortelli and van de Geer (2018). Other studies focusing on the fused lasso and not directly involving its synthesis form also implicitly relied on some kind of dictionary to handle the error term by projections onto some columns of this dictionary, see for instance the lower interpolant by Lin et al (2017).…”
Section: Total Variation Regularized Estimators
confidence: 99%