2015
DOI: 10.1214/15-aoas842

SLOPE—Adaptive variable selection via convex optimization

Abstract: We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to

$$\min_{b \in \mathbb{R}^p} \; \frac{1}{2}\|y - Xb\|_{\ell_2}^2 + \lambda_1 |b|_{(1)} + \lambda_2 |b|_{(2)} + \cdots + \lambda_p |b|_{(p)},$$

where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0$ and $|b|_{(1)} \ge |b|_{(2)} \ge \cdots \ge |b|_{(p)}$ are the decreasing absolute values of the entries of b. This is a convex program…
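To make the definition concrete, here is a minimal Python sketch that evaluates this objective and constructs the Benjamini–Hochberg-style weight sequence $\lambda_i = \Phi^{-1}(1 - iq/(2p))$ studied in the paper; the function names and the default q are illustrative choices, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def slope_objective(y, X, b, lam):
    """SLOPE objective: 0.5 * ||y - X b||_2^2 + sum_i lam_i * |b|_(i).

    lam must be nonincreasing and nonnegative; |b|_(i) denotes the
    i-th largest absolute entry of b.
    """
    resid = y - X @ b
    sorted_abs = np.sort(np.abs(b))[::-1]   # |b|_(1) >= ... >= |b|_(p)
    return 0.5 * resid @ resid + lam @ sorted_abs

def bh_lambda_sequence(p, q=0.1):
    """BH-style weights lambda_i = Phi^{-1}(1 - i*q/(2p)), i = 1..p."""
    i = np.arange(1, p + 1)
    return norm.ppf(1.0 - i * q / (2.0 * p))
```

Because the largest weight is paired with the largest coefficient magnitude, bigger entries of b receive heavier penalties, which is what gives SLOPE its adaptive behavior.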

Cited by 239 publications (370 citation statements); references 55 publications.
“…In this study, we adopted the FastProxSL1 algorithm proposed in [21] to compute the prox and update the regularization parameters. For the experiments in Section 3, the step sizes $t_k = 1/0.9^k$, the iteration ends when $k = 1000$ or $|X_{k+1} - X_k| < \delta$, and the tolerance $\delta = 1 \times 10^{-8}$, which satisfy the convergence condition of the proximal gradient method [27].…”
Section: SLOPE Methods (mentioning, confidence: 99%)
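For context on the quoted setup, the following is a minimal, self-contained sketch: a stack-based prox for the sorted-ℓ1 penalty in the spirit of the paper's FastProxSL1 algorithm, plus a plain proximal-gradient loop using the quoted stopping rule (at most 1000 iterations, tolerance 1e-8). The fixed step size 1/L and all function names are assumptions for illustration; the citing study uses its own step-size schedule.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 penalty:
    argmin_x 0.5*||x - v||^2 + sum_i lam_i * |x|_(i).

    Stack-based pool-adjacent-violators sketch in the spirit of
    FastProxSL1; lam must be nonincreasing and nonnegative.
    """
    sign = np.sign(v)
    order = np.argsort(np.abs(v))[::-1]        # sort |v| nonincreasing
    w = np.abs(v)[order] - lam                 # unconstrained solution
    # Pool adjacent violators so block averages are nonincreasing.
    blocks = []                                # each entry: [start, end, sum, avg]
    for i, wi in enumerate(w):
        blocks.append([i, i, wi, wi])
        while len(blocks) > 1 and blocks[-2][3] <= blocks[-1][3]:
            hi = blocks.pop()
            lo = blocks.pop()
            s = lo[2] + hi[2]
            blocks.append([lo[0], hi[1], s, s / (hi[1] - lo[0] + 1)])
    x = np.zeros_like(w)
    for start, end, _, avg in blocks:
        x[start:end + 1] = max(avg, 0.0)       # clip at zero
    out = np.empty_like(x)
    out[order] = x                             # undo the sort
    return sign * out

def slope_proximal_gradient(y, X, lam, max_iter=1000, tol=1e-8):
    """Proximal gradient for SLOPE with the stopping rule from the
    quote: stop at k = 1000 or when ||x_{k+1} - x_k|| < 1e-8.
    The fixed step 1/L (L = squared spectral norm of X) is an
    assumption, not the cited study's schedule."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(p)
    for k in range(max_iter):
        grad = X.T @ (X @ x - y)
        x_new = prox_sorted_l1(x - grad / L, lam / L)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```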
“…However, the choice of regularization parameter is nontrivial. To achieve adaptivity and accuracy, we formulate the FMT reconstruction as a sorted $L_1$-norm minimization problem and introduce a sorted L-one penalized estimation (SLOPE) [21] method to solve it. Moreover, an iterative-shrinking permissible region (ISPR) strategy is also combined with SLOPE to improve the performance.…”
Section: Introduction (mentioning, confidence: 99%)
“…In high dimensional settings, statistical understanding of these algorithms is crucial not only to obtain quality solutions but also to invent new types of algorithms, as witnessed in recent literature [2,8,49,57]. Efficient and distributed algorithm implementations also become critical due to high computational demands.…”
Section: Challenges and Future Research (mentioning, confidence: 99%)
“…In this paper, we propose a family of shrinkage variable selection operators by controlling the kth largest norm (KAN), a special case of SLOPE studied in Bogdan et al. (2015) for the investigation of the false discovery rate. Different from SLOPE, the proposed KAN method is designed to encourage grouped variable selection without any group information.…”
Section: Zhao et al. (2009) and… (mentioning, confidence: 99%)
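To make the "special case of SLOPE" remark concrete: a SLOPE weight sequence that is constant on its first k entries and zero afterwards reduces the penalty to the sum of the k largest absolute coefficients, a k-largest-norm-style penalty that treats all coefficients in the top k identically and hence encourages grouping. Whether this exact weight choice matches the KAN operators of the cited paper is an assumption; the sketch below merely verifies the identity.

```python
import numpy as np

def slope_penalty(b, lam):
    # sum_i lam_i * |b|_(i), with |b|_(1) >= ... >= |b|_(p)
    return lam @ np.sort(np.abs(b))[::-1]

def k_largest_norm(b, k):
    # sum of the k largest absolute entries of b
    return np.sort(np.abs(b))[::-1][:k].sum()

b = np.array([0.5, -2.0, 1.0, 0.1])
k = 2
lam = np.concatenate([np.ones(k), np.zeros(b.size - k)])        # (1, 1, 0, 0)
assert np.isclose(slope_penalty(b, lam), k_largest_norm(b, k))  # both equal 3.0
```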