2008
DOI: 10.1214/009053607000000929

High-dimensional generalized linear models and the lasso

Abstract: We consider high-dimensional generalized linear models with Lipschitz loss functions, and prove a nonasymptotic oracle inequality for the empirical risk minimizer with Lasso penalty. The penalty is based on the coefficients in the linear predictor, after normalization with the empirical norm. The examples include logistic regression, density estimation and classification with hinge loss. Least squares regression is also discussed.

Comment: Published in the Annals of Statistics; see http://dx.doi.org/10.1214/009053607000000929.
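
To make the estimator in the abstract concrete, here is a minimal sketch (not the paper's code) of Lasso-penalized empirical risk minimization for the logistic-loss case, solved by proximal gradient descent with soft-thresholding. Standardizing the columns of X stands in for the paper's normalization of the penalty by the empirical norm; the step size, iteration count and clipping are illustrative assumptions.

import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_logistic(X, y, lam, n_iter=500):
    # Minimize (1/n) * sum_i log(1 + exp(-y_i * x_i'beta)) + lam * ||beta||_1,
    # with labels y in {-1, +1}. Columns of X should be standardized to unit
    # empirical norm so the penalty matches the paper's normalization.
    n, p = X.shape
    # Step = 1/L, where L = ||X||_2^2 / (4n) bounds the gradient's Lipschitz constant.
    step = 4.0 * n / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        margins = np.clip(y * (X @ beta), -30.0, 30.0)  # clip to avoid overflow in exp
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

A penalty level lam on the order of sqrt(log p / n) is the theoretically motivated scale; in practice it would be tuned, e.g. by cross-validation.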

Cited by 521 publications (476 citation statements: 17 supporting, 459 mentioning, 0 contrasting), spanning 2008–2024. References 23 publications.

“…Motivated by many practical prediction problems, including those that arise in microarray data analysis and natural language processing, this problem has been extensively studied in recent years. The results can be divided into two categories: those that study the predictive power of β̂ [9,30,12] and those that study its sparsity pattern and reconstruction properties [4,32,18,19,17,8]; this article falls into the first of these categories.…”
Section: Introduction (mentioning)
confidence: 99%
“…[2,6,10,13,20,19] demonstrated the fundamental result that ℓ1-penalized least squares estimators achieve the rate √(s/n)·√(log p), which is very close to the oracle rate √(s/n) achievable when the true model is known. [17] demonstrated a similar fundamental result on the excess forecasting error loss under both quadratic and non-quadratic loss functions. Thus the estimator can be consistent and can have excellent forecasting performance even under very rapid, nearly exponential growth of the total number of regressors p. [1] investigated the ℓ1-penalized quantile regression process, obtaining similar results.…”
Section: Introduction (mentioning)
confidence: 57%
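
The rate comparison in this statement is easy to see numerically. Below is a small simulation sketch (an illustration, not code from any cited paper) comparing ℓ1-penalized least squares under p ≫ n against an oracle least squares fit that is told the true support in advance; the penalty level follows the standard √(2 log p / n) scaling, and all problem sizes are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 100, 1000, 5                       # p >> n, s-sparse truth
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + rng.standard_normal(n)

lam = np.sqrt(2 * np.log(p) / n)             # theory-guided penalty level
lasso = Lasso(alpha=lam).fit(X, y)
oracle = LinearRegression().fit(X[:, :s], y)  # knows the true support

X_new = rng.standard_normal((2000, p))
signal = X_new @ beta
print("lasso prediction MSE :", np.mean((signal - lasso.predict(X_new)) ** 2))
print("oracle prediction MSE:", np.mean((signal - oracle.predict(X_new[:, :s])) ** 2))

Despite p being ten times n, the lasso's prediction error stays within a log-factor of the oracle's, which is the content of the cited rate result.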
“…Several papers have begun to investigate estimation of HDSMs, primarily focusing on penalized mean regression, with the ℓ1-norm acting as a penalty function [2,6,10,13,17,20,19]. [2,6,10,13,20,19] demonstrated the fundamental result that ℓ1-penalized least squares estimators achieve the rate √(s/n)·√(log p), which is very close to the oracle rate √(s/n) achievable when the true model is known.…”
Section: Introduction (mentioning)
confidence: 99%
“…In addition to linear regression models, the idea of penalized regression has been broadly applied to various statistical models and problems: generalized linear models (Van de Geer, 2008), Cox proportional hazards models (Fan and Li, 2002), Gaussian graphical models (Friedman et al., 2013), principal component analysis (Park, 2013) and high-dimensional clustering problems (Kwon et al., 2013).…”
Section: Introduction (mentioning)
confidence: 99%
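
As a runnable taste of one member of this penalized-model family, here is a minimal graphical-lasso sketch using scikit-learn's GraphicalLasso; the chain-graph precision matrix, sample size and penalty level are illustrative assumptions, not taken from any of the cited papers.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
# Sparse 5x5 precision matrix encoding a chain graph (positive definite).
prec = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(prec), size=500)

# The l1 penalty on the precision matrix recovers the sparse graph structure.
model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))

The estimated precision matrix is near-zero off the first diagonals, recovering the chain structure; this is the same ℓ1 mechanism as in the regression setting, applied to inverse-covariance estimation.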