2014
DOI: 10.1007/978-3-319-11656-3_1
Large Margin Distribution Learning

Abstract: Support vector machines (SVMs) and Boosting are possibly the two most popular learning approaches of the past two decades. It is well known that the margin is a fundamental issue for SVMs, whereas the margin theory for Boosting has recently been defended, establishing a connection between these two mainstream approaches. Recent theoretical results disclosed that the margin distribution, rather than a single margin, is crucial for generalization performance, and suggested to optimize t…

Cited by 36 publications (27 citation statements)
References 17 publications
“…However, recent theoretical results disclosed that the margin distribution, rather than the minimum margin, was more crucial to the generalization performance [41]. Such a conjecture has been recently proved by Gao and Zhou [16].…”
Section: Formulation
confidence: 96%
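The point this excerpt makes, that the minimum margin alone does not capture the quality of a margin distribution, can be illustrated numerically. The margin values below are toy numbers chosen for illustration, not data from the paper:

```python
import numpy as np

# Margins of two hypothetical classifiers on the same five examples.
# Both share the same minimum margin (0.5), but classifier B keeps its
# margins concentrated near the mean, i.e. a better margin distribution.
margins_a = np.array([0.5, 0.6, 0.7, 3.0, 4.0])
margins_b = np.array([0.5, 1.5, 1.6, 1.7, 1.8])

min_a, min_b = margins_a.min(), margins_b.min()
var_a, var_b = margins_a.var(), margins_b.var()

# min_a == min_b == 0.5, yet var_b < var_a: a minimum-margin criterion
# cannot distinguish the two classifiers, while distribution statistics can.
```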
“…According to Occam's razor [11], when a classifier is trained to be too complex, its performance can become worse rather than better. This phenomenon is called overfitting: the trained model adapts so closely to the training data that it exaggerates slight fluctuations in that data, leading to poor generalization performance [4], [12]. However, the AdaBoost algorithm has a notable ability to resist overfitting, which has been observed in many studies [13]-[15].…”
Section: Introduction
confidence: 99%
“…[4], [20]-[26]. However, the biggest problem with the statistical view is that these algorithms do not explain well why AdaBoost is resistant to overfitting [4], [12].…”
Section: Introduction
confidence: 99%
“…Several approaches have been proposed to control the margin distribution in SVM-like classifiers [17][18][19][20][21][22]. In [18], a maximum average margin classifier (MAMC) is proposed, in which instead of maximizing the minimum margin, the margin mean for the training data is maximized without slack variables.…”
Section: Introduction
confidence: 99%
“…In [18], a maximum average margin classifier (MAMC) is proposed, in which, instead of maximizing the minimum margin, the margin mean over the training data is maximized without slack variables. In [21,22], in addition to maximizing the margin mean, the margin variance is minimized; the resulting classifier is called the large margin distribution machine (LDM).…”
Section: Introduction
confidence: 99%
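As a rough sketch of the LDM-style objective described in the excerpt above: for a linear classifier with weight vector w, the margin of example i is y_i(w·x_i), and the method maximizes the margin mean while minimizing the margin variance. This is a minimal illustration of the statistics involved, not the paper's optimization procedure; all names and data below are hypothetical:

```python
import numpy as np

def margin_stats(w, X, y):
    """Return (mean, variance) of the margins y_i * (w . x_i).

    LDM-style objectives reward a large margin mean and penalize
    a large margin variance, rather than maximizing the minimum margin.
    """
    margins = y * (X @ w)
    return margins.mean(), margins.var()

# Toy data: two linearly separable points with labels +1 and -1.
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
w = np.array([2.0, 0.0])

mean, var = margin_stats(w, X, y)
# Both margins equal 2, so the mean is 2.0 and the variance is 0.0.
```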