1989
DOI: 10.21236/ada217331

On Metric Entropy, Vapnik-Chervonenkis Dimension, and Learnability for a Class of Distributions

Abstract: In [23], Valiant proposed a formal framework for distribution-free concept learning which has generated a great deal of interest. A fundamental result regarding this framework was proved by Blumer et al. [6] characterizing those concept classes which are learnable in terms of their Vapnik-Chervonenkis (VC) dimension. More recently, Benedek and Itai [4] studied learnability with respect to a fixed probability distribution (a variant of the original distribution-free framework) and proved an analogous result cha…

Cited by 9 publications (7 citation statements)
References 28 publications
“…Learnability for all distributions then simply imposes the uniform (upper and lower) bounds requiring the supremum over all distributions for both general (i.e., probabilistic) active learning algorithms and for deterministic algorithms. For the first part of the theorem, we need the following result relating the VC dimension of a concept class to its metric entropy: the VC dimension of C is finite iff sup_P N(ε, C, P) < ∞ for all ε > 0 (e.g., see [5] or [13] and references therein). The first part of the theorem follows immediately from this result.…”
Section: Theorem 2 (C Is Actively Learnable For All Distributions Iff …)
confidence: 99%
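As context for the excerpt above, a sketch of the standard definitions (assuming the usual fixed-distribution learning setup; this is not quoted from the citing paper): N(ε, C, P) denotes the ε-covering number of the concept class C under the pseudometric induced by the distribution P,

\[
d_P(A, B) = P(A \,\triangle\, B), \qquad
N(\epsilon, C, P) = \min\bigl\{\, |F| \;:\; \forall A \in C \ \exists B \in F \ \text{with } d_P(A, B) \le \epsilon \,\bigr\},
\]

and the quoted equivalence then reads

\[
\mathrm{VCdim}(C) < \infty \iff \sup_{P} N(\epsilon, C, P) < \infty \ \text{ for every } \epsilon > 0,
\]

where the supremum runs over all probability distributions P on the instance space. Whether the covering sets F are required to be drawn from C itself varies across sources; the finiteness statement is unaffected.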
“…The first term of the lower bound is from [13] and the second term of the lower bound follows from Lemma 2. The upper bound is from [10] which is a refinement of a result from [16] using techniques originally from [7].…”
Section: Proof
confidence: 99%
See 1 more Smart Citation
“…Comparing (8) and (7), we see that these two are of the same size when psdim(H) and BD are close to psdim(L_H) and , respectively. The leading constants in (8) are smaller by a factor of more than thirty than those in (7). As an example, when Y = [0, B] and L(z, y) = |z - y|, we have that = B, D = 1, and psdim(H) = psdim(L_H), so m_fce compares favorably with m_femp.…”
Section: The Real-valued Distribution-free Case
confidence: 90%
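For readers unfamiliar with the notation in this excerpt: psdim(·) presumably denotes the pseudo-dimension (Pollard dimension) of a real-valued function class. A standard definition, given here as background rather than as material from the citing paper: a class H of functions from X to R pseudo-shatters points x_1, …, x_d if there exist thresholds t_1, …, t_d such that every sign pattern is realizable,

\[
\forall\, (b_1, \dots, b_d) \in \{0,1\}^d \ \ \exists\, h \in H \ \text{such that} \ \ h(x_i) > t_i \iff b_i = 1, \quad i = 1, \dots, d,
\]

and psdim(H) is the largest such d (infinite if arbitrarily large point sets can be pseudo-shattered); some sources use ≥ in place of >. For {0,1}-valued classes this reduces to the VC dimension.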
“…A considerable amount of work has been done along these lines. For example, learnability with respect to a class of distributions (as opposed to the original distribution-free framework) has been studied (Benedek & Itai, 1988; Kulkarni, 1989, 1991; Natarajan, 1988, 1989). Notably, Benedek and Itai (1988) first studied learnability with respect to a fixed and known probability distribution, and characterized learnability in this case in terms of the metric entropy of the concept class.…”
Section: Introduction
confidence: 99%