2013
DOI: 10.1111/insr.12023
A Survey of L1 Regression

Abstract: L1 regularization, or regularization with an L1 penalty, is a popular idea in statistics and machine learning. This paper reviews the concept and application of L1 regularization for regression. It is not our aim to present a comprehensive list of the utilities of the L1 penalty in the regression setting. Rather, we focus on what we believe is the set of most representative uses of this regularization technique, which we describe in some detail. Thus, we deal with a number of L1-regularized methods for …
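Below is a minimal sketch of the technique the abstract describes: L1-regularized least squares (the lasso), using scikit-learn. The data, the penalty weight alpha, and all names are illustrative assumptions, not drawn from the paper.

```python
# A minimal lasso sketch: the L1 penalty shrinks coefficients and sets
# many of them exactly to zero, so the fit also performs variable selection.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]               # only 3 of 20 predictors are active
y = X @ beta + 0.5 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)        # alpha controls the L1 penalty strength
print("non-zero coefficients:", np.flatnonzero(model.coef_))
```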

Cited by 104 publications (54 citation statements)
References 106 publications (162 reference statements)
“…for $k = 1$ the $L_1$ prior is a Laplacian function, and for $k = 2$ the $L_2$ prior is a Gaussian function. Using the definition $\|r\|_k := \bigl(\sum_i |r_i|^k\bigr)^{1/k}$ of an $L_k$-norm, we derive properties of $L_k$ for ranges of $k \in \mathbb{R}^{+}$, similar to Vidaurre et al (2013). $L_0$ is the apparent choice for parameter selection due to its direct penalization of the number of $r_i \neq 0$.…”
Section: Problem Statement
confidence: 99%
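The norm definition in the excerpt above is directly computable; the following sketch (assuming NumPy, with illustrative values) evaluates $\|r\|_k$ for a few $k$ and counts non-zeros for the $L_0$ case.

```python
# Compute ||r||_k = (sum_i |r_i|^k)^(1/k) for k > 0, as defined above.
import numpy as np

def lk_norm(r, k):
    return float(np.sum(np.abs(r) ** k) ** (1.0 / k))

r = np.array([2.0, -1.0, 0.0, 0.5])
print(lk_norm(r, 1))         # L1 norm: 3.5
print(lk_norm(r, 2))         # L2 norm: ~2.29
print(np.count_nonzero(r))   # "L0" count of non-zero entries: 2
```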
“…Neuronal classification could benefit tremendously from the growing machine learning toolset for metadata integration, including generalized linear models [100] such as logistic regression. Matrix eQTL [101] provides a user-friendly R package to test some of these linear models.…”
Section: Challenges and Opportunities
confidence: 99%
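As a concrete instance of the generalized linear models mentioned above, here is a hedged sketch of L1-penalized logistic regression in scikit-learn (not Matrix eQTL's API); the data and settings are invented for illustration.

```python
# L1-penalized logistic regression: the penalty drives uninformative
# coefficients to zero, selecting features while fitting the GLM.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
logits = X[:, 0] - 2.0 * X[:, 1]          # only two informative features
y = (logits + rng.standard_normal(200) > 0).astype(int)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("selected features:", np.flatnonzero(clf.coef_))
```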
“…Here $\sigma^2$ is the true error variance, $p_0$ is the number of non-zero slope coefficients and $k$ is a constant that does not depend on $n$. Apart from the $\log(p)$ term and the constant, this equals the loss expected if an oracle told us the true set of predictors and we fit least squares, so in this sense the $\log(p)$ factor is the (relatively small) price that is paid to gain the wide applicability of the lasso (and in particular the ability to select out a large number of unneeded predictors). See, for example, Bühlmann (2013), Candès and Plan (2009) and Vidaurre et al. (2013). Unfortunately, these inequalities do not correspond to practical implementation of the lasso.…”
Section: Deterioration of the Lasso
confidence: 99%
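The bound this excerpt paraphrases appears to have been lost in extraction. A standard lasso oracle inequality of the form described, under suitable design conditions (this reconstruction is an assumption, not quoted from the excerpt), reads:

$$
\frac{1}{n}\,\bigl\|X(\hat{\beta} - \beta^0)\bigr\|_2^2 \;\le\; \frac{k\,\sigma^2\, p_0 \log p}{n},
$$

so that, apart from the $\log p$ factor and the constant $k$, the rate matches the oracle least-squares loss $\sigma^2 p_0 / n$ obtained when the true support is known.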