2011
DOI: 10.1214/10-AOS827

ℓ1-penalized quantile regression in high-dimensional sparse models

Abstract: We consider median regression and, more generally, a possibly infinite collection of quantile regressions in high-dimensional sparse models. In these models the number of regressors p is very large, possibly larger than the sample size n, but only at most s regressors have a non-zero impact on each conditional quantile of the response variable, where s grows more slowly than n. Since ordinary quantile regression is not consistent in this case, we consider ℓ1-penalized quantile regression (ℓ1-QR), which penali…
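To make the setting concrete, the following is a minimal sketch of ℓ1-penalized median regression on a simulated sparse, high-dimensional design. It uses scikit-learn's QuantileRegressor; the penalty level, sample sizes, and sparsity below are illustrative assumptions, not the data-driven penalty choice studied in the paper.

    # Minimal sketch: l1-penalized median regression (tau = 0.5) on a sparse,
    # high-dimensional design with p > n and only s non-zero coefficients.
    # The penalty level `alpha`, sizes, and noise are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(0)
    n, p, s = 100, 500, 5                        # sample size, regressors, sparsity
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:s] = 1.0                          # only s regressors matter
    y = X @ beta_true + rng.standard_normal(n)   # noise with zero median

    # QuantileRegressor minimizes the check (pinball) loss plus an l1 penalty,
    # i.e. an l1-QR estimator up to the scaling of the penalty level.
    model = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs")
    model.fit(X, y)

    support = np.flatnonzero(np.abs(model.coef_) > 1e-8)
    print("estimated support:", support)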

Cited by 495 publications (557 citation statements)
References 27 publications
“…In this particular experiment, we also test the post-LASSO estimation as in Belloni and Chernozhukov (2011). This means that we run the QMLE procedure on the model selected by the LASSO method, which is the true model.…”
Section: A Simulation Experiment: Multidimensional Case (mentioning)
confidence: 99%
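For readers unfamiliar with the two-step procedure mentioned in this excerpt, the sketch below illustrates a post-ℓ1-QR style refit: an ℓ1-penalized first stage selects regressors, and an unpenalized fit is then run on the selected model. The second stage here is an unpenalized quantile regression purely for illustration (the excerpt's second stage is QMLE), and the function name and tuning values are assumptions.

    # Hedged sketch of a two-step refit in the spirit of the excerpt above:
    # step 1 selects regressors by l1-penalized quantile regression, step 2
    # refits without the penalty on the selected columns only.
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    def post_l1_qr(X, y, tau=0.5, alpha=0.05):
        # Step 1: l1-penalized quantile regression for model selection.
        first = QuantileRegressor(quantile=tau, alpha=alpha, solver="highs")
        first.fit(X, y)
        selected = np.flatnonzero(np.abs(first.coef_) > 1e-8)
        if selected.size == 0:                   # nothing selected: keep step 1
            return selected, first
        # Step 2: refit without the penalty on the selected regressors.
        refit = QuantileRegressor(quantile=tau, alpha=0.0, solver="highs")
        refit.fit(X[:, selected], y)
        return selected, refit

    # usage (with X, y as in the previous sketch):
    # selected, refit = post_l1_qr(X, y)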
“…It is worth noting that in the proof of lemma 1 of [23], the authors did not prove that the convergence in the last step is uniform; hence the proof is incomplete. In a recent paper [2], the quantile regression model was considered and an L1-penalized method was proposed. Properties of the estimator were presented under restricted-eigenvalue-type conditions and smoothness assumptions on the density function of the noise.…”
Section: Introduction (mentioning)
confidence: 99%
“…The L1-penalty is considered to nullify "excessive" coefficients (Belloni and Chernozhukov [2011]). The simple lasso-penalized QR optimisation problem is:…”
Section: Appendix (mentioning)
confidence: 99%
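The displayed optimisation problem in this last excerpt is cut off. As an assumption about the intended display, a standard way to write the simple lasso-penalized quantile regression problem, with check function ρ_τ and penalty level λ ≥ 0, is:

    \hat{\beta}(\tau) \in \arg\min_{\beta \in \mathbb{R}^{p}} \; \frac{1}{n}\sum_{i=1}^{n} \rho_{\tau}\!\left(y_i - x_i'\beta\right) + \lambda \lVert \beta \rVert_{1},
    \qquad \rho_{\tau}(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right).

Belloni and Chernozhukov (2011) additionally scale the penalty by √(τ(1−τ)) and regressor-specific normalizations and choose λ in a pivotal, data-driven way; the plain λ‖β‖₁ form above is the generic lasso-style version the excerpt appears to refer to.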