2018
DOI: 10.1214/17-aos1670

Slope meets Lasso: Improved oracle bounds and optimality

Abstract: We show that two polynomial time methods, a Lasso estimator with adaptively chosen tuning parameter and a Slope estimator, adaptively achieve the minimax prediction and ℓ2 estimation rate (s/n) log(p/s) in high-dimensional linear regression on the class of s-sparse vectors in R^p. This is done under the Restricted Eigenvalue (RE) condition for the Lasso and under a slightly more constraining assumption on the design for the Slope. The main results have the form of sharp oracle inequalities accounting for the…
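In schematic form (a paraphrase of the abstract, not a statement from the paper: constants, the noise variance σ², and design conditions such as RE are suppressed), the adaptive optimality claim reads

\[
\sup_{\|\beta^*\|_0 \le s} \mathbf{E}\,\|\hat\beta - \beta^*\|_2^2 \;\asymp\; \sigma^2\,\frac{s}{n}\,\log\Big(\frac{p}{s}\Big),
\]

with the same order holding for the prediction error \(\|X(\hat\beta - \beta^*)\|_2^2/n\), where \(\hat\beta\) is either the adaptively tuned Lasso or the Slope estimator.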

Cited by 126 publications (190 citation statements: 6 supporting, 184 mentioning, 0 contrasting)
References 22 publications

Citation statements (ordered by relevance):
“…Finally, we show that the solution to (6) and (7) is unique. From the proof of Lemma 24, equations (6) and (7) correspond to the first-order optimality conditions of min_{σ≥0} max_{θ≥0} Ψ(σ, θ), and a stationary point (σ*, θ*) always exists.…”
Section: Proof of Theorem (mentioning)
Confidence: 81%
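Equations (6) and (7) themselves are not reproduced in this excerpt; as a generic sketch of what "first-order optimality conditions" means for the constrained saddle point min_{σ≥0} max_{θ≥0} Ψ(σ, θ), the KKT form is

\[
\partial_\sigma \Psi(\sigma^*, \theta^*) \ge 0, \qquad \sigma^*\,\partial_\sigma \Psi(\sigma^*, \theta^*) = 0,
\]
\[
\partial_\theta \Psi(\sigma^*, \theta^*) \le 0, \qquad \theta^*\,\partial_\theta \Psi(\sigma^*, \theta^*) = 0,
\]

so that at an interior stationary point both partial derivatives vanish.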
“…It is not easy to handle them simultaneously. A simplification we can make is to first set τ to 1 and find the minimum σ such that the first equation (6) and the inequality E_{B,Z}[η(B + σZ; F_y, F_λ)] ≤ δ hold. Once we get σ_min and the optimal λ*, the corresponding τ_min can then be obtained via (7): τ_min = (1 − (1/δ) E_{B,Z}[η*(B + σ_min Z; F_y, F*_λ)])^{−1}, and λ* is in turn updated to λ*/τ_min.…”
Section: F Proof of Proposition (mentioning)
Confidence: 99%
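A minimal numerical sketch of the two-step simplification quoted above. Everything below is a placeholder: the excerpt does not reproduce equations (6)-(7), the denoiser η, or the distributions F_y and F_λ, so soft-thresholding, a sparse-Gaussian signal, and all parameter values are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (NOT the cited paper's objects): a sparse-Gaussian
# B plays the role of F_y, Z is standard Gaussian noise, and the expectation
# E_{B,Z}[...] is approximated by Monte Carlo.
n_mc = 200_000
B = rng.normal(size=n_mc) * (rng.random(n_mc) < 0.1)  # assumed signal law
Z = rng.normal(size=n_mc)
delta = 0.45   # assumed sampling ratio
lam = 1.0      # assumed initial level of lambda*

def expectation_term(sigma):
    # Monte Carlo stand-in for E_{B,Z}[eta(B + sigma Z; F_y, F_lambda)]:
    # for soft-thresholding at level sigma*lam, we use the fraction of
    # coordinates that survive the threshold.
    return np.mean(np.abs(B + sigma * Z) > sigma * lam)

# Step 1: fix tau = 1 and take the smallest sigma on a grid for which the
# expectation constraint <= delta holds. (With these toy choices the
# constraint is slack over the whole grid, so sigma_min is the grid floor;
# the paper-specific equation (6) would pin down a nontrivial sigma_min.)
grid = np.linspace(0.01, 5.0, 500)
sigma_min = next(s for s in grid if expectation_term(s) <= delta)

# Step 2: calibrate tau via the quoted formula and rescale lambda*.
tau_min = 1.0 / (1.0 - expectation_term(sigma_min) / delta)
lam_star = lam / tau_min
print(f"sigma_min={sigma_min:.3f}  tau_min={tau_min:.3f}  lambda*={lam_star:.3f}")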
“…One example of such an order-optimal estimator is the SLOPE estimator, which was recently analyzed [55]. If one is willing to tolerate a slightly slower rate of O(σ² · (k ln d)/n) for signal recovery, several other estimators can be used.…”
Section: Side Information (mentioning)
Confidence: 99%
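For scale (aligning the cited notation with the abstract above, k ↔ s and d ↔ p, constants suppressed), the order-optimal rate and the slightly slower one differ only in the logarithmic factor:

\[
\sigma^2\,\frac{k \log(d/k)}{n} \;\le\; \sigma^2\,\frac{k \ln d}{n},
\]

and the gap widens when k is close to d, since then log(d/k) ≪ log d.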
“…This paper is motivated by the SLOPE method [21] for its use of the SL1 regularization, where it brings many benefits not available with the popular ℓ1-based regularization: the capability of false discovery rate (FDR) control [21,22], adaptivity to unknown signal sparsity [23], and clustering of coefficients [24,25]. Also, efficient optimization methods [13,21,26] and more theoretical analysis [23,27-29] are under active research.…”
Section: Lasso and Slope (mentioning)
Confidence: 99%
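As a concrete illustration of the SL1 penalty underlying SLOPE (this is the standard definition from the SLOPE literature, not taken from the excerpt): given a non-increasing weight sequence λ1 ≥ λ2 ≥ … ≥ λp ≥ 0, the penalty pairs the largest weight with the largest absolute coefficient, which is what drives FDR control and coefficient clustering; with all weights equal it reduces to the ordinary ℓ1 penalty.

import numpy as np

def sorted_l1_norm(beta, lam):
    """Sorted-L1 (SL1) penalty used by SLOPE: sum_i lam_i * |beta|_(i),
    where |beta|_(1) >= |beta|_(2) >= ... and lam is non-increasing."""
    abs_sorted = np.sort(np.abs(beta))[::-1]   # |beta| in decreasing order
    return float(np.dot(np.asarray(lam), abs_sorted))

# With equal weights the SL1 penalty reduces to a plain l1 penalty.
beta = np.array([0.5, -2.0, 0.0, 1.0])
lam_slope = np.array([1.0, 0.75, 0.5, 0.25])   # non-increasing weights
lam_lasso = np.full(4, 0.75)                   # constant weights
print(sorted_l1_norm(beta, lam_slope))         # SLOPE-style penalty: 3.0
print(sorted_l1_norm(beta, lam_lasso))         # 0.75 * ||beta||_1 = 2.625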