2007
DOI: 10.1214/07-ejs008

Sparsity oracle inequalities for the Lasso

Abstract: This paper studies oracle properties of $\ell_1$-penalized least squares in a nonparametric regression setting with random design. We show that the penalized least squares estimator satisfies sparsity oracle inequalities, i.e., bounds in terms of the number of non-zero components of the oracle vector. The results are valid even when the dimension of the model is (much) larger than the sample size and the regression matrix is not positive definite. They can be applied to high-dimensional linear regression, to non…
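To fix ideas, a schematic rendering of the estimator the abstract refers to and of the generic shape of a sparsity oracle inequality; the dictionary, weights, constants, and log factors below are illustrative only and differ from the paper's exact statements.

```latex
% l1-penalized least squares over a dictionary f_1, ..., f_M (schematic):
\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^{M}}
  \Big\{ \frac{1}{n}\sum_{i=1}^{n}\big(Y_i - f_{\beta}(X_i)\big)^{2}
         + \lambda \sum_{j=1}^{M} |\beta_j| \Big\},
\qquad f_{\beta} = \sum_{j=1}^{M} \beta_j f_j .

% A sparsity oracle inequality bounds the risk of f_{\hat\beta} by that of any
% competitor beta plus a price driven by its sparsity
% M(\beta) = \#\{ j : \beta_j \neq 0 \}  (constants omitted, shape only):
\| f_{\hat\beta} - f \|^{2} \;\lesssim\;
  \inf_{\beta} \Big\{ \| f_{\beta} - f \|^{2} + \frac{M(\beta)\,\log M}{n} \Big\}.
```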

Cited by 357 publications (427 citation statements) · References 26 publications
“…Motivated by many practical prediction problems, including those that arise in microarray data analysis and natural language processing, this problem has been extensively studied in recent years. The results can be divided into two categories: those that study the predictive power of $\hat{\beta}$ [9,30,12] and those that study its sparsity pattern and reconstruction properties [4,32,18,19,17,8]; this article falls into the first of these categories.…”
Section: Introduction (mentioning, confidence: 99%)
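To make the excerpt's two categories concrete, a minimal simulation sketch (assumed data, names, and tuning; not taken from the paper or the citing article) that fits a Lasso and scores it both by prediction error and by support recovery:

```python
# Sketch of the two evaluation criteria the excerpt distinguishes:
# predictive power of beta_hat vs. recovery of its sparsity pattern.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 500, 5                          # p >> n, s-sparse truth
beta = np.zeros(p)
beta[:s] = 1.0
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)

beta_hat = Lasso(alpha=0.1).fit(X, y).coef_

prediction_error = np.mean((X @ (beta_hat - beta)) ** 2)        # category 1
support_match = set(np.flatnonzero(beta_hat)) == set(range(s))  # category 2
print(f"prediction error: {prediction_error:.3f}, exact support recovery: {support_match}")
```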
“…The standard choice of λ employs (2.5), where A ≥ 1 is a constant that does not depend on X, chosen so that (2.4) holds no matter what X is. Note that $\sqrt{n}\,\|S\|_{\infty}$ is a maximum of $N(0, \sigma^2)$ variables, which are correlated if columns of X are correlated, as they typically are in the sample.…”
Section: Lasso as a Benchmark in Parametric and Nonparametric Models (mentioning, confidence: 99%)
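The excerpt alludes to choosing the penalty level from the distribution of $\sqrt{n}\,\|S\|_{\infty}$, a maximum of correlated Gaussian coordinates. A minimal simulation sketch, under assumed names, noise level, and quantile rule (not the cited paper's exact choice of λ), of how such a quantile-based penalty can be computed:

```python
# Sketch: approximate a penalty level from the distribution of
# sqrt(n) * ||S||_inf with S = X'e / n and e ~ N(0, sigma^2 I).
# The coordinates of S are correlated whenever columns of X are.
import numpy as np

def penalty_level(X, sigma=1.0, alpha=0.05, n_sim=2000, A=1.1, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    maxima = np.empty(n_sim)
    for b in range(n_sim):
        e = sigma * rng.standard_normal(n)
        S = X.T @ e / n                        # score vector
        maxima[b] = np.sqrt(n) * np.max(np.abs(S))
    # pick lambda so the score condition holds with probability >= 1 - alpha
    return A * np.quantile(maxima, 1 - alpha)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
print(penalty_level(X))
```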
“…Thus the estimator can be consistent and can have excellent forecasting performance even under very rapid, nearly exponential growth of the total number of regressors p. [1] investigated the $\ell_1$-penalized quantile regression process, obtaining similar results. See [9,2,3,4,5,11,12,15] for many other interesting developments and a detailed review of the existing literature.…”
Section: Introduction (mentioning, confidence: 99%)
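As an illustration of the $\ell_1$-penalized quantile regression mentioned above, a short sketch using scikit-learn's QuantileRegressor (pinball loss plus an $\ell_1$ penalty); this is only a stand-in with assumed data and tuning, not the estimator or theory studied in the cited work:

```python
# Sketch: l1-penalized quantile regression (pinball loss + l1 penalty).
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(n)

# alpha weights the l1 penalty; quantile=0.5 gives penalized median regression.
model = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs").fit(X, y)
print(np.count_nonzero(model.coef_), "non-zero coefficients")
```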
“…Theoretical properties of the lasso and related methods for high dimensional data have been examined by Fan and Peng (2004), Bunea et al (2007), Candès and Tao (2007), Huang et al (2008a,b), Kim et al (2008), Bickel et al (2009) and Meinshausen and Yu (2009), among many others. Most of the references consider quadratic objective functions and linear or nonparametric models with an additive mean-zero error.…”
Section: Introduction (mentioning, confidence: 99%)