2009
DOI: 10.1007/s10994-009-5104-z

The generalization performance of ERM algorithm with strongly mixing observations

Abstract: Generalization performance is a central concern of theoretical research in machine learning. The previous main bounds describing the generalization ability of the Empirical Risk Minimization (ERM) algorithm are based on independent and identically distributed (i.i.d.) samples. In order to study the generalization performance of the ERM algorithm with dependent observations, we first establish the exponential bound on the rate of relative uniform convergence of the ERM algorithm with exponentially strongly mixing…
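For readers unfamiliar with the ERM algorithm itself, a minimal sketch follows. The toy hypothesis class (1-D threshold classifiers) and the data are illustrative assumptions, not taken from the paper; ERM simply returns the hypothesis with the smallest average loss on the sample.

```python
import numpy as np

def empirical_risk(h, X, y):
    """Average 0-1 loss of hypothesis h on the sample (X, y)."""
    preds = np.array([h(x) for x in X])
    return np.mean(preds != y)

def erm(hypotheses, X, y):
    """Return the hypothesis minimizing the empirical risk, and its risk."""
    risks = [empirical_risk(h, X, y) for h in hypotheses]
    best = int(np.argmin(risks))
    return hypotheses[best], risks[best]

# Toy sample and a finite class of threshold classifiers h_t(x) = 1[x >= t].
X = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.7])
y = np.array([0, 0, 0, 1, 1, 1])
hypotheses = [lambda x, t=t: int(x >= t) for t in np.linspace(0, 1, 11)]

best_h, best_risk = erm(hypotheses, X, y)
print(best_risk)  # 0.0 — a threshold in (0.4, 0.7) separates this sample
```

The generalization question studied by the paper is how far this minimizer's empirical risk can deviate from its expected risk; the i.i.d. assumption on (X, y) used in classical bounds is exactly what the strongly mixing setting relaxes.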

Cited by 70 publications (36 citation statements)
References 33 publications
“…Extension, if possible, to non-standard setups (setups where either data are not IID or the minimizing objective is not the expected loss) often requires specifically tailored analysis (e.g. Gamarnik 2003; Lozano et al. 2006; Zou et al. 2009). In contrast, extension of robustness-based analysis to non-standard setups is straightforward.…”
Section: Introduction
confidence: 99%
“…That is, we estimate the quantity (11) (for any ε > 0) by following the enlightening idea of [13, 20]. Such a study is motivated by the observation that a uniform convergence bound fails to capture the phenomenon that for those functions f ∈ F for which the expected risk E(f) is small, the deviation E(f) − E_n(f) is also small with large probability.…”
Section: Relative Uniform Convergence Bound
confidence: 99%
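The "relative" deviation this excerpt refers to is, in its standard formulation, the uniform deviation normalized by the size of the expected risk. The following is a sketch of that usual form; the paper's exact quantity (11) may differ, e.g. by a constant or an additive term in the denominator:

```latex
\Pr\left\{ \sup_{f \in \mathcal{F}}
  \frac{\mathcal{E}(f) - \mathcal{E}_n(f)}{\sqrt{\mathcal{E}(f)}}
  > \varepsilon \right\}
```

where \(\mathcal{E}(f)\) is the expected risk and \(\mathcal{E}_n(f)\) the empirical risk over n observations. Dividing by \(\sqrt{\mathcal{E}(f)}\) is what makes the bound sharper for small-risk functions, which is precisely the phenomenon the excerpt says a plain uniform convergence bound fails to capture.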
“…We denote by I_1 the quantity on the right-hand side of inequality (15), and by I_2 the quantity on the left-hand side of inequality (15). Then, taking δ = ε E(f_j), supposing that 0 < ε ≤ (2/3)(aL), and using a method similar to that of [13], we have…”
Section: Relative Uniform Convergence Bound
confidence: 99%
“…Smale and Zhou [14] considered an online learning algorithm based on Markov sampling. Zou et al. [26] established bounds on the generalization performance of the ERM algorithm with strongly mixing observations. Xu and Chen [22] considered the learning rates of a regularized regression algorithm with strongly mixing sequences.…”
Section: Introduction
confidence: 99%