Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing (STOC '86), 1986
DOI: 10.1145/12130.12158
Classifying learnable geometric concepts with the Vapnik-Chervonenkis dimension

Abstract: We extend Valiant's learnability model to learning classes of concepts defined by regions in Euclidean space E^n. Our methods lead to a unified treatment of some of Valiant's results, along with previous results of Pearl and Devroye and Wagner on distribution-free convergence of certain pattern recognition algorithms. We show that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned.…
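To make the combinatorial parameter concrete, a minimal sketch follows (not code from the paper; the interval class and helper names are illustrative). A class C shatters a point set S when every 0/1 labeling of S is realized by some concept in C, and VCdim(C) is the size of the largest shattered set; the example checks this for closed intervals on the real line, whose VC dimension is 2.

def interval_labels(points, a, b):
    """Label each point 1 if it lies in the interval [a, b], else 0."""
    return tuple(1 if a <= x <= b else 0 for x in points)

def is_shattered(points):
    """True if closed intervals realize every 0/1 labeling of `points`."""
    pts = sorted(points)
    # Intervals with endpoints at data points realize every achievable
    # labeling; add a degenerate interval for the all-zero labeling.
    candidates = [(a, b) for a in pts for b in pts if a <= b] + [(1.0, 0.0)]
    realized = {interval_labels(points, a, b) for a, b in candidates}
    return len(realized) == 2 ** len(points)

print(is_shattered([0.0, 1.0]))        # True: any two points are shattered
print(is_shattered([0.0, 1.0, 2.0]))   # False: the labeling 1, 0, 1 is unrealizable
# So VCdim(intervals) = 2 is finite, and by the paper's criterion the class
# is distribution-free learnable.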

Cited by 124 publications (131 citation statements); citing publications range from 1987 to 2012. References 15 publications.
“…It is worth recalling that the VC dimension of a class of visual concepts determines its learnability: the larger VCdim(C), the more training examples are needed to reduce the error in generalizing C to new instances below a given level (Blumer et al., 1986; Edelman, 1993). Because in real-life situations training data are always at a premium (Edelman, 2002), and because high-VCdim classifiers are too flexible and are therefore prone to overfitting (Baum and Haussler, 1989; Geman et al., 1992), it is necessary to break down the classification task into elements by relegating them to purposive visual subsystems.…”
Section: Representational Capacity
confidence: 99%
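The quantitative link between VCdim(C) and the number of training examples can be illustrated with a short computation (an assumed form with illustrative constants, loosely modeled on the sufficient bound in the journal version of this work; it is not quoted from any of the cited papers).

import math

def sufficient_sample_size(vc_dim, epsilon, delta):
    """Examples sufficient for error <= epsilon with probability >= 1 - delta."""
    return math.ceil(max((4.0 / epsilon) * math.log2(2.0 / delta),
                         (8.0 * vc_dim / epsilon) * math.log2(13.0 / epsilon)))

for d in (1, 5, 20):
    print(f"VCdim = {d:2d} -> {sufficient_sample_size(d, 0.1, 0.05)} examples")
# The required sample size grows roughly linearly in VCdim(C), which is the
# dependence the quoted passage describes.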
“…We begin by extending the results of [Blumer et al. 1986, Natarajan 1987] on the learnability of boolean-valued functions to the learnability of general functions. To do so, we give a new and simple definition of the dimension of a family of functions and use it to prove a theorem identifying the most general class of function families that are learnable from polynomially many examples.…”
Section: Introduction
confidence: 95%
“…The recent interest in formal methods in machine learning started with the introduction of a formal framework for concept learning in [Valiant 1984]. Since then, the framework has been extended and analyzed by numerous authors [Blumer et al. 1986, Natarajan 1987a, Kearns et al. 1987]. Unfortunately, the framework appears rather limited in scope and does not seem to capture the essence of many of the learning paradigms and architectures in use by the experimentalists.…”
Section: Introduction
confidence: 99%
“…Minimum disagreement strategies, in the noise-free PAC case, are always successful for classes of finite VC-dimension [3]. This result carries over to learning from random classification noise [1].…”
Section: Minimum Disagreement Strategies
confidence: 84%
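A minimum disagreement strategy can be sketched concretely for a toy class of finite VC dimension (the class, helper names, and noise rate below are illustrative, not taken from the cited papers): over 1-D thresholds {x : x >= t}, return the hypothesis that disagrees with the fewest training labels, which remains a sensible choice under random classification noise.

import random

def min_disagreement_threshold(samples):
    """samples: list of (x, label) pairs; returns the threshold t in the
    class {x >= t} that disagrees with the fewest labels."""
    candidates = sorted({x for x, _ in samples})
    candidates.append(candidates[-1] + 1.0)    # threshold above every point
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum(1 for x, y in samples if (x >= t) != bool(y))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Usage: labels from the target "x >= 0.5", with 10% of them flipped at random.
random.seed(0)
data = []
for _ in range(200):
    x = random.random()
    y = int(x >= 0.5)
    if random.random() < 0.1:
        y = 1 - y
    data.append((x, y))
print(min_disagreement_threshold(data))        # close to the true threshold 0.5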