1996
DOI: 10.1006/jcss.1996.0033
Fat-Shattering and the Learnability of Real-Valued Functions

Abstract: We consider the problem of learning real-valued functions from random examples when the function values are corrupted with noise. With mild conditions on independent observation noise, we provide characterizations of the learnability of a real-valued function class in terms of a generalization of the Vapnik-Chervonenkis dimension, the fat-shattering function, introduced by Kearns and Schapire. We show that, given some restrictions on the noise, a function class is learnable in our model if and only if its fat-s…

Cited by 77 publications (73 citation statements)
References 18 publications
“…This means that the band-dimension is not simply one number depending on H, but is, rather, a function depending on H. (A number of such scale-sensitive dimensions have proven to be useful in learning theory [16,1,9,23,24].) Let H be a set of real-valued functions.…”
Section: Measures of Dimension and the Main Results
confidence: 99%
“…This dimension was introduced by Kearns and Schapire [16] in their work on the learnability of p-concepts. Here, we use the notation and terminology of [9]. Suppose that H is a set of functions from X to [0, 1] and that γ > 0.…”
Section: The Restricted Problem: C = H
confidence: 99%
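The γ-fat-shattering notion quoted above can be made concrete with a small sketch: a set of points is γ-shattered by H if there are fixed witness levels r_i such that every binary labeling can be realized by some h ∈ H with a margin of γ above or below the witness. The function name `gamma_shatters` and the toy class of constant functions below are illustrative choices, not from the paper, and the brute-force check is only feasible for small finite classes and point sets.

```python
from itertools import product

def gamma_shatters(H, points, witnesses, gamma):
    """Check whether the finite class H (a list of callables X -> [0, 1])
    gamma-shatters `points` with the given witnesses: for every sign
    pattern b there must be some h in H with h(x_i) >= r_i + gamma when
    b_i = 1 and h(x_i) <= r_i - gamma when b_i = 0."""
    for b in product([0, 1], repeat=len(points)):
        realized = any(
            all(
                (h(x) >= r + gamma) if bit else (h(x) <= r - gamma)
                for x, r, bit in zip(points, witnesses, b)
            )
            for h in H
        )
        if not realized:
            return False
    return True

# Toy class: just the two constant functions 0 and 1.
H = [lambda x: 0.0, lambda x: 1.0]

# One point with witness 0.5 is 0.3-shattered: the constant 1 realizes
# the "+" label and the constant 0 realizes the "-" label.
print(gamma_shatters(H, [0.0], [0.5], 0.3))          # True

# Two points cannot be shattered by constants: the pattern (+, -) would
# need a single h that is large at one point and small at the other.
print(gamma_shatters(H, [0.0, 1.0], [0.5, 0.5], 0.3))  # False
```

The fat-shattering function fat_H(γ) is then the largest n for which some n points (with some witnesses) pass this check; note that the witnesses are fixed once for all 2^n labelings, which is what makes the dimension scale-sensitive in γ.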
“…However, in all these cases one obtains covering number bounds which behave like O(ε^{-D}) for some generalized dimension D. If this is the case, replace the pseudo-dimension by the dimension D, and all the results below follow. Note, however, that in some cases D may depend on ε, and a more careful analysis is needed using the so-called fat-shattering dimension, as is discussed in Bartlett et al. (1996). Furthermore, there are specific situations where the pseudo-dimension yields nearly optimal bounds for the estimation error, since in that case the pseudo-dimension is essentially equivalent to another combinatorial dimension, called the fat-shattering dimension, which gives nearly matching lower bounds on estimation error (see Bartlett et al. (1996)).…”
Section: Definition 4 (An Estimator f_Dn ∈ F_Dn Is Weakly Consistent)
confidence: 99%
“…We begin with the definition of the fat-shattering dimension, which was first introduced in Kearns and Schapire (1990), and has been used for several problems in learning since (Alon et al., 1997; Bartlett, Long, & Williamson, 1996; Anthony & Bartlett, 1994; Bartlett & Long, 1995).…”
Section: Theoretical Analysis of Generalization
confidence: 99%