High Dimensional Probability II, 2000
DOI: 10.1007/978-1-4612-1358-1_29

Rademacher Processes and Bounding the Risk of Function Learning

Abstract: We construct data-dependent upper bounds on the risk in function learning problems. The bounds are based on local norms of the Rademacher process indexed by the underlying function class, and they require neither prior knowledge about the distribution of the training examples nor any specific properties of the function class. Using Talagrand-type concentration inequalities for empirical and Rademacher processes, we show that the bounds hold with high probability, with a failure probability that decreases exponentially fast as the sample…
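As a rough illustration of the quantity these bounds are built from, the sketch below estimates the (global, non-localized) empirical Rademacher average of a finite function class by Monte Carlo sampling of sign vectors. This is a hedged sketch, not the paper's method: the function name `empirical_rademacher` and the threshold-classifier example are assumptions made for illustration, and the paper's bounds rely on localized versions of these averages rather than the plain global average computed here.

```python
import numpy as np

def empirical_rademacher(values, n_rounds=1000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher average
        E_eps [ sup_{f in F} (1/n) * sum_i eps_i * f(X_i) ],
    where `values` is an (m, n) array whose row j holds
    f_j(X_1), ..., f_j(X_n) for the j-th function in a finite class F.
    (Illustrative helper, not from the paper.)"""
    rng = np.random.default_rng(seed)
    _, n = values.shape
    sups = np.empty(n_rounds)
    for t in range(n_rounds):
        eps = rng.choice([-1.0, 1.0], size=n)  # i.i.d. Rademacher signs
        sups[t] = np.max(values @ eps) / n     # supremum over the class
    return sups.mean()

# Hypothetical example: 20 threshold classifiers on 50 uniform points.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=50)
thresholds = np.linspace(0.0, 1.0, 20)
values = np.array([(x <= t).astype(float) for t in thresholds])
print(empirical_rademacher(values))
```

For a finite class of m bounded functions, Massart's finite-class lemma bounds this average by a term of order sqrt(log m / n), which is the kind of data-dependent complexity term that enters such risk bounds.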

Cited by 195 publications (240 citation statements) | References 12 publications
“…Following the ideas initially introduced by Koltchinskii and Panchenko (1999), Bartlett et al (2005) and Bartlett et al (2004) propose some localized versions of Rademacher averages as tight data-dependent measures of complexity. Recently, it has been proved that these localized Rademacher averages can be used to construct margin-adaptive model selection procedures (see Boucheron et al, 2005, for a brief survey, or Koltchinskii, 2003, for a more complete study).…”
Section: Results (mentioning)
confidence: 99%
“…In particular, the method in [16] requires a polynomial decay of the regularization error, D(λ) = O(λ^β) for some 0 < β ≤ 1. Similar ideas of norm reduction also appear in [13] for the purpose of bounding the risk of function learning.…”
Section: §5 Strong Estimates by Iteration (mentioning)
confidence: 99%
“…Subsequently, several researchers have proposed related disagreement-based algorithms with improved sample complexity, e.g. [8,11,5].…”
Section: Disagreement-Based Active Learning (mentioning)
confidence: 99%