2018
DOI: 10.48550/arxiv.1805.05133
Preprint

Model selection with lasso-zero: adding straw to the haystack to better find needles

Abstract: The high-dimensional linear model y = Xβ_0 + ε is considered, and the focus is put on the problem of recovering the support S_0 of the sparse vector β_0. We introduce Lasso-Zero, a new ℓ_1-based estimator whose novelty resides in an "overfit, then threshold" paradigm and the use of noise dictionaries concatenated to X for overfitting the response. To select the threshold, we employ the quantile universal threshold based on a pivotal statistic that requires neither knowledge nor preliminary estimation of the noise…
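The "overfit, then threshold" paradigm in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' exact algorithm: the noise-dictionary size, the aggregation of coefficients by a median over repeated dictionaries, and the use of a near-zero Lasso penalty as a stand-in for basis pursuit are all choices made here for a self-contained example.

```python
# Hedged sketch of the "overfit, then threshold" idea: concatenate a
# random Gaussian noise dictionary G to X, overfit y on [X | G], keep
# only the X-part of the coefficients, aggregate over dictionaries,
# and hard-threshold. Details are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import Lasso

def lasso_zero_sketch(X, y, tau, n_repeats=20, rng=None):
    """Estimate the support of beta_0 by overfitting y with [X | G]
    for random noise dictionaries G, then thresholding at tau."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    coefs = np.empty((n_repeats, p))
    for r in range(n_repeats):
        G = rng.standard_normal((n, n))   # noise dictionary (assumed n x n)
        XG = np.hstack([X, G])
        # A near-zero penalty approximates basis pursuit (exact overfit).
        fit = Lasso(alpha=1e-6, fit_intercept=False,
                    max_iter=50_000).fit(XG, y)
        coefs[r] = fit.coef_[:p]          # keep only the X-part
    beta_med = np.median(coefs, axis=0)   # aggregate over dictionaries
    support = np.flatnonzero(np.abs(beta_med) > tau)
    return beta_med, support
```

On well-conditioned Gaussian designs with strong signals, the median of the X-coefficients concentrates near β_0 while the noise dictionaries absorb ε, so thresholding separates the support from the rest.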

Cited by 3 publications
(12 citation statements)
References 37 publications
“…BP estimator) in the full null model when β = 0 [18]. For the BP estimator, Descloux and Sardy [11] suggest the threshold τ^fn_α, defined as the 1 − α quantile of max ‖β̂^fn‖_1, …”
Section: Selection of the Threshold
confidence: 99%
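The null-model threshold in the statement above can be approximated by simulation. The sketch below is an assumed simplification: it drops the noise dictionaries, uses a near-zero Lasso penalty as a basis-pursuit proxy, and normalizes the simulated noise (the estimator scales linearly in y, so the statistic does not depend on the noise level); the authors' exact pivotal statistic and quantile may differ.

```python
# Monte Carlo sketch of a quantile-based threshold in the full null
# model (beta = 0): simulate pure-noise responses, record the largest
# fitted coefficient, and take its 1 - alpha empirical quantile.
# A hedged illustration, not the paper's exact construction.
import numpy as np
from sklearn.linear_model import Lasso

def null_quantile_threshold(X, alpha=0.05, n_sim=200, rng=None):
    rng = np.random.default_rng(rng)
    n, p = X.shape
    stats = np.empty(n_sim)
    for m in range(n_sim):
        eps = rng.standard_normal(n)
        eps /= np.linalg.norm(eps)   # normalize: the fit scales linearly
                                     # in y, so the statistic is pivotal
        fit = Lasso(alpha=1e-6, fit_intercept=False,
                    max_iter=50_000).fit(X, eps)
        stats[m] = np.max(np.abs(fit.coef_))
    return np.quantile(stats, 1 - alpha)
```

Any coefficient exceeding this quantile would be unlikely under the full null, which is the rationale for thresholding at it.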
“…When the entries of X are i.i.d. N(0, 1), the optimal value of λ selected by AMP theory provides a thresholded LASSO for which the derived sign estimator is the best one to recover S(β). One may notice that the threshold selection proposed in Descloux and Sardy [11] does not allow recovery of S(β) with large probability when β has many large components (intuitively, when β is far from 0). Instead, our heuristic application of the knockoff methodology allows for almost perfect control of the FWER at level 0.05.…”
Section: Numerical Comparisons
confidence: 99%
“…In practice too, there is reason to consider alternatives to CV-based hyperparameter selection in sparse regression: sparse estimators are unstable, and selecting only one estimator can result in arbitrarily ignoring certain variables among a correlated group with similar predictive power [37]. For the Lasso, these difficulties have motivated researchers to introduce several aggregation schemes, such as the Bolasso [3], stability selection [19], the lasso-zero [9] and the random lasso [34], which are shown to have some better properties than the standard Lasso.…”
Section: Introduction
confidence: 99%
“…We propose Robust Lasso-Zero, an extension of the Lasso-Zero methodology [Descloux and Sardy, 2018], initially introduced for sparse linear models, to the sparse corruptions problem. We give theoretical guarantees on the sign recovery of the parameters for a slightly simplified version of the estimator, called Thresholded Justice Pursuit.…”
confidence: 99%