2006
DOI: 10.1007/11776420_29

Aggregation and Sparsity Via ℓ1 Penalized Least Squares

Abstract: This paper shows that near-optimal rates of aggregation and adaptation to unknown sparsity can be simultaneously achieved via ℓ1 penalized least squares in a nonparametric regression setting. The main tool is a novel oracle inequality on the sum of the empirical squared loss of the penalized least squares estimate and a term reflecting the sparsity of the unknown regression function.
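The ℓ1 penalized least squares estimator discussed in the abstract is the Lasso. As a minimal sketch of the idea (a plain coordinate-descent implementation written for illustration, not the paper's own procedure; the problem sizes and penalty level are chosen here), the ℓ1 penalty sets most estimated coefficients exactly to zero, adapting to unknown sparsity:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the ℓ1 norm: shrink z toward 0 by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # (1/n)||X_j||^2 per column
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # residual excluding coordinate j
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# Sparse ground truth: only 3 of 30 coefficients are nonzero.
rng = np.random.default_rng(0)
n, p = 100, 30
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
X = rng.standard_normal((n, p))
y = X @ beta + 0.1 * rng.standard_normal(n)

b = lasso_cd(X, y, lam=0.1)
print(np.count_nonzero(b))   # few nonzeros: the penalty adapts to sparsity
```

The coordinate update solves the one-dimensional ℓ1-penalized problem in closed form via soft-thresholding, which is why exact zeros appear in the solution.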

Cited by 61 publications (80 citation statements). References 15 publications.
“…For a discussion of the concept of SOI we refer to Tsybakov (2006). Examples of SOI are proved in Koltchinskii (2006), Bunea et al. (2006, 2007a, 2007b), van de Geer (2006), and Bickel et al. (2007) for the Lasso, BIC and Dantzig selector aggregates. Note that the SOI for the Lasso and Dantzig selector are not as strong as those for the BIC: they fail to guarantee optimal rates for MS, linear and convex aggregation unless φ1, .…”
Section: ))
Confidence: 99%
“…Several papers have begun to investigate estimation of hdsms, primarily focusing on mean regression with the ℓ1-norm acting as a penalty function [4,6,7,8,9,17,22,28,31,33]. The results in [4,6,7,8,17,22,31,33] demonstrated the fundamental result that ℓ1-penalized least squares estimators achieve the rate √(s/n)·√(log p), which is very close to the oracle rate √(s/n) achievable when the true model is known.…”
Section: Introduction
Confidence: 94%
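The excerpt above contrasts the ℓ1-penalized rate, which carries an extra √(log p) factor, with the oracle rate √(s/n). A back-of-the-envelope sketch with illustrative values (chosen here, not taken from the paper or its citers) shows how modest that gap is:

```python
import math

# Illustrative values: sample size, dimension, sparsity (assumed, not from the paper)
n, p, s = 1000, 200, 5

oracle_rate = math.sqrt(s / n)                     # rate when the true model is known
lasso_rate = oracle_rate * math.sqrt(math.log(p))  # extra sqrt(log p) factor

print(round(oracle_rate, 4))   # 0.0707
print(round(lasso_rate, 4))    # 0.1628
```

Even with p twenty times larger than s·40, the penalized rate is within a factor of about 2.3 of the oracle rate, which is the sense in which the Lasso is "very close" to oracle performance.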
“…Readers are referred to Shao (1997) for more discussion on this issue. When p is allowed to increase with n, Bunea et al. (2006) show that consistent variable selection can also be achieved via multiple testing. Much more general choices of k involving types of cross validation are given later in this section.…”
Section: Regression
Confidence: 99%
“…Note that, in fact, (2) can be proved for other procedures: a first example is given in Bunea et al. (2005, 2006), where (2) is established for a Lasso-type f̂n in the regression model with squared loss.…”
Section: Mannor et al. (2003), Lugosi and Vayatis
Confidence: 99%