2009
DOI: 10.1016/j.jco.2009.01.002

Elastic-net regularization in learning theory

Abstract: Within the framework of statistical learning theory we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie [H. Zou, T. Hastie, Regularization and variable selection via the elastic net, J. R. Stat. Soc. Ser. B 67 (2) (2005) 301–320] for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-des…
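For orientation, the elastic-net scheme combines the lasso (ℓ1) and ridge (ℓ2) penalties in a single functional. A sketch in standard finite-dimensional notation, following the form in Zou and Hastie (the symbols X, y, β, λ1, λ2 are conventional and not taken from this abstract):

```latex
\hat{\beta} \;=\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
  \|y - X\beta\|_2^2 \;+\; \lambda_2 \|\beta\|_2^2 \;+\; \lambda_1 \|\beta\|_1,
\qquad \lambda_1, \lambda_2 \ge 0.
```

The ℓ2 term makes the problem strictly convex and tends to select correlated variables together, while the ℓ1 term enforces sparsity.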



Cited by 228 publications (168 citation statements). References 41 publications.
“…Most results are stated under the standard uniform boundedness assumption for the output: that for some constant M > 0, |y| ≤ M almost surely. This standard assumption is abandoned in [5,9,22]. In [5,9], it is assumed that the output satisfies the condition…”
Section: Introduction and Main Results (mentioning)
confidence: 99%
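The condition itself is truncated in the excerpt and is not reconstructed here. For context only, a typical way the boundedness assumption |y| ≤ M is relaxed in the literature is via a Bernstein-type moment condition; the following is an illustrative standard form, not necessarily the exact condition assumed in [5,9]:

```latex
% Illustrative Bernstein-type moment condition (an assumption of this sketch,
% not quoted from [5,9]):
\int_Y |y|^m \, d\rho(y \mid x) \;\le\; \frac{m!}{2}\, \sigma^2 M^{m-2}
\qquad \text{for all integers } m \ge 2 \text{ and almost every } x,
```

for some constants M, σ > 0.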
“…In fact, since the model imposes both ℓ1 and ℓ2 norms on the feature vector, it resembles the elastic net regularization [13], which has the advantage of achieving higher stability with respect to random sampling [14].…”
Section: Proposed Model Formulation (mentioning)
confidence: 99%
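As a concrete illustration of a model that imposes both ℓ1 and ℓ2 norms, here is a minimal sketch using scikit-learn's ElasticNet on synthetic data (the parameter names alpha and l1_ratio are scikit-learn conventions, not notation from the cited papers):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))    # synthetic design matrix
beta_true = np.zeros(20)
beta_true[:3] = [2.0, -1.5, 1.0]      # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(100)

# l1_ratio in (0, 1) mixes the l1 (sparsity) and l2 (stability) penalties.
model = ElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X, y)
print(model.coef_)                    # most coefficients are shrunk to zero
```

The ℓ2 component is what drives the stability under random resampling noted in the excerpt: small perturbations of the data produce small changes in the estimated coefficients.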
“…This latter case is of interest for estimators in RKHS since estimates in various other norms, e.g., uniform norm or Sobolev norms, can be easily obtained; see [35]. Also, it is of interest in sparse learning (see [13] and references therein), where one is interested in estimating the coefficients obtained by expanding f_H on a given dictionary.…”
Section: Some Background on Supervised Learning (mentioning)
confidence: 99%
“…Example 4 (Elastic-Net Regularization). The elastic-net algorithm proposed in [45] is studied in [13] in the context of learning with an infinite-dimensional overcomplete dictionary of features (ψ_γ)_{γ∈Γ}. In this case, we let ℓ²(Γ) be the space of β = (β_γ)_{γ∈Γ} such that Σ_{γ∈Γ} |β_γ|² < ∞ and look for an estimator of the form Σ_{γ∈Γ} β_γ ψ_γ.…”
Section: Theorem 2: If Assumption 1 Holds and Moreover and There Are… (mentioning)
confidence: 99%
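To make the estimator in this excerpt concrete for a finite dictionary, here is a minimal proximal-gradient (iterative soft-thresholding) sketch for the elastic-net functional ‖Ψβ − y‖² + λ(‖β‖₁ + ε‖β‖₂²); the implementation and variable names are illustrative, not code from [13] or [45]:

```python
import numpy as np

def elastic_net_ista(Psi, y, lam, eps, n_iter=500):
    """Minimize ||Psi @ beta - y||^2 + lam * (||beta||_1 + eps * ||beta||_2^2)
    by proximal gradient descent (iterative soft-thresholding)."""
    # Step size 1/L, where L = 2 * ||Psi||_2^2 bounds the gradient's Lipschitz constant.
    tau = 1.0 / (2.0 * np.linalg.norm(Psi, 2) ** 2)
    beta = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Psi.T @ (Psi @ beta - y)           # gradient of the data-fit term
        v = beta - tau * grad
        # Proximal step for tau*lam*(|.| + eps*(.)**2):
        # soft-threshold, then shrink by 1 / (1 + 2*tau*lam*eps).
        beta = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
        beta /= 1.0 + 2.0 * tau * lam * eps
    return beta
```

The closed form of the proximal step (soft-threshold followed by a uniform shrinkage) is what makes the elastic-net penalty convenient here; with ε = 0 the update reduces to the usual lasso soft-thresholding.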