2015
DOI: 10.1016/j.neucom.2014.07.062

Learning a hyperplane classifier by minimizing an exact bound on the VC dimension

Abstract: The VC dimension measures the complexity of a learning machine, and a low VC dimension leads to good generalization. While SVMs produce state-of-the-art learning performance, it is well known that the VC dimension of an SVM can be unbounded; despite good results in practice, there is no guarantee of good generalization. In this paper, we show how to learn a hyperplane classifier by minimizing an exact, or Θ, bound on its VC dimension. The proposed approach, termed the Minimal Complexity Machine (MCM), involve…
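The hard-margin MCM described in the abstract can be posed as a linear program: minimize h subject to 1 ≤ y_i(w·x_i + b) ≤ h for every training sample. The following is a minimal sketch of that formulation, not the authors' implementation; the solver choice (`scipy.optimize.linprog`) and the toy dataset are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def mcm_fit(X, y):
    """Sketch of the hard-margin MCM linear program:
       minimize h  subject to  1 <= y_i (w.x_i + b) <= h  for all i.
       Variable vector z = [w (d entries), b, h]."""
    n, d = X.shape
    c = np.zeros(d + 2)
    c[-1] = 1.0                                   # objective: minimize h
    yX = y[:, None] * X
    # upper constraints: y_i (w.x_i + b) - h <= 0
    A_upper = np.hstack([yX, y[:, None], -np.ones((n, 1))])
    b_upper = np.zeros(n)
    # lower constraints: -y_i (w.x_i + b) <= -1
    A_lower = np.hstack([-yX, -y[:, None], np.zeros((n, 1))])
    b_lower = -np.ones(n)
    A_ub = np.vstack([A_upper, A_lower])
    b_ub = np.concatenate([b_upper, b_lower])
    bounds = [(None, None)] * (d + 1) + [(1, None)]  # w, b free; h >= 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    w, b, h = res.x[:d], res.x[d], res.x[d + 1]
    return w, b, h

# hypothetical, linearly separable toy data
X = np.array([[2.0, 1.0], [3.0, -1.0], [-2.0, 0.5], [-3.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b, h = mcm_fit(X, y)
preds = np.sign(X @ w + b)
```

On separable data the LP is feasible, and minimizing h directly shrinks the quantity whose square bounds the VC dimension in the paper's analysis.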

Cited by 38 publications (6 citation statements)
References 30 publications
“…Table 1 summarizes information about the number of samples and features of each dataset. [5] for the sake of brevity; an added reason is that a fair comparison would be between two methods that use a fuzzy methodology.…”
Section: Results
confidence: 99%
“…However, according to Burges [4], SVMs can have a very large VC dimension, and "at present there exists no theory which shows that good generalization performance is guaranteed for SVMs". In recent work [5], we have shown how to learn a bounded margin hyperplane classifier, termed the Minimal Complexity Machine (MCM), by minimizing an exact bound on its VC dimension.…”
Section: Introduction
confidence: 99%
“…In contrast, the MCM guarantees good generalization accuracy by obtaining a tighter bound (both lower and upper) on the VC dimension while also achieving excellent training error rates. In addition, as noted in [12], the number of support vectors obtained by the MCM is comparatively smaller than that of the SVM. This makes the MCM suitable for complex classification tasks while also providing an opportunity to reduce the overall overhead during classification.…”
Section: SVM vs MCM
confidence: 89%
“…For the other key operations in object classification, Support Vector Machines (SVMs) [6] have been the traditional choice. Recently, the Minimal Complexity Machine (MCM) [12] has been shown to outperform the SVM in terms of accuracy, computational complexity, and sparsity of the feature representation. The strongest argument in favor of the MCM is its provably good generalization accuracy and its requirement of far fewer support vectors than SVMs.…”
Section: Introduction
confidence: 99%
“…The notion of an exact bound means that the objective being minimized bounds the VC dimension from both above and below; this means that the two are close to each other. The theory that allows us to do so is motivated by the recently proposed Minimal Complexity Machine (MCM) [14,15]. The MCM shows that it is possible to learn a hyperplane classifier by minimizing an exact bound on the VC dimension.…”
Section: Introduction
confidence: 99%