2010
DOI: 10.1007/s10994-010-5173-z

On the equivalence of weak learnability and linear separability: new relaxations and efficient boosting algorithms

Abstract: Boosting algorithms build highly accurate prediction mechanisms from a collection of low-accuracy predictors. To do so, they employ the notion of weak-learnability. The starting point of this paper is a proof which shows that weak learnability is equivalent to linear separability with ℓ1 margin. While this equivalence is a direct consequence of von Neumann's minimax theorem, we derive the equivalence directly using Fenchel duality. We then use our derivation to describe a family of relaxations to the weak-learnability …
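For context (not quoted from the abstract): in the usual boosting matrix notation, with A the m × n matrix A_{ij} = y_i h_j(x_i) (rows indexed by training examples, columns by weak hypotheses, the class assumed closed under negation), the equivalence can be sketched via von Neumann's minimax theorem as

\[
\min_{d \in \Delta_m} \bigl\|A^\top d\bigr\|_\infty
\;=\;
\min_{d \in \Delta_m} \;\max_{\|w\|_1 \le 1} d^\top A w
\;=\;
\max_{\|w\|_1 \le 1} \;\min_{d \in \Delta_m} d^\top A w
\;=\;
\max_{\|w\|_1 \le 1} \;\min_{1 \le i \le m} (A w)_i ,
\]

i.e. the best edge guaranteed against every distribution d over the examples (weak learnability) equals the best achievable ℓ1 margin (linear separability). The exact normalization in the paper may differ; this is a sketch of the statement only, whereas the paper derives it through Fenchel duality.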

Cited by 29 publications (18 citation statements). References 21 publications (21 reference statements).
“…The proof uses ideas from convex analysis. We refer the reader to [3,23]; see also a similar derivation in [26]. The definition of L̃ implies that it is the infimal convolution of L and the quadratic function (β/2)v².…”
Section: A3 Proof of Theorem 24 (mentioning)
confidence: 99%
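For readers unfamiliar with the operation the excerpt refers to, here is a minimal sketch of the infimal-convolution (Moreau-envelope) construction, using standard convex-analysis definitions; the scalar/vector setting and constants in the citing paper may differ:

\[
\tilde{L}(u)
\;=\;
\inf_{v}\,\Bigl\{\, L(v) \;+\; \tfrac{\beta}{2}\,\|u - v\|^2 \,\Bigr\},
\qquad
\tilde{L}^{*}(\theta)
\;=\;
L^{*}(\theta) \;+\; \tfrac{1}{2\beta}\,\|\theta\|^2 .
\]

The second identity is the standard conjugacy fact that the conjugate of an infimal convolution is the sum of the conjugates; it is the kind of Fenchel-duality step such proofs typically rely on, and it makes L̃ a smooth (gradient-Lipschitz) approximation of L when L is closed and convex.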
“…As before, we will choose a linear function L̃_t ≤ L in each round and squash φ_t towards it to obtain the new auxiliary function. Therefore (14) continues to hold, and we can again inductively prove that φ_t continues to retain an elliptical quadratic form:…”
Section: BOOM: A Fusion (mentioning)
confidence: 90%
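A sketch of why such an invariant can hold, assuming (for illustration only; the actual update in the cited work may differ) that the "squashing" step is a convex combination with step size η ∈ (0, 1) of the current auxiliary function and the linear lower bound:

\[
\phi_t(w) \;=\; a_t + \tfrac{1}{2}\,(w - \mu_t)^\top Q_t\,(w - \mu_t),
\qquad
\tilde{L}_t(w) \;=\; c_t + g_t^\top w \;\le\; L(w),
\]
\[
\phi_{t+1}(w) \;=\; (1-\eta)\,\phi_t(w) + \eta\,\tilde{L}_t(w)
\;=\; a_{t+1} + \tfrac{1}{2}\,(w - \mu_{t+1})^\top Q_{t+1}\,(w - \mu_{t+1}),
\qquad Q_{t+1} = (1-\eta)\,Q_t .
\]

A convex combination of an elliptical quadratic and an affine function is again an elliptical quadratic: only the center, μ_{t+1} = μ_t − (η/(1−η)) Q_t^{-1} g_t, and the constant offset change, which is what lets the inductive argument in the excerpt go through.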
“…However, parallel boosting algorithms on their own are too slow. See for instance [14] for a primal-dual analysis of the rate of convergence of boosting algorithms in the context of loss minimization.…”
Section: Introduction (mentioning)
confidence: 99%
“…n′ is the regularization parameter that denotes the level of relaxation of the margin constraint. Shalev-Shwartz and Singer [16] proved that optimizing the above linear programming problem is equivalent to maximizing the average of the n′ smallest margins. Thus, n′ = 1 corresponds to maximizing the minimum margin, n′ = n corresponds to maximizing the average margin over all labeled examples, and in general n′ lies in the range from 1 to n.…”
Section: A. Soft Margins and LPBoost (mentioning)
confidence: 99%
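For reference, a sketch of the soft-margin linear program the excerpt is describing, in a standard LPBoost-style form; the symbols (n labeled examples (x_i, y_i), weak hypotheses h_j, relaxation parameter n′) are assumed for illustration and the exact constants in the cited works may differ:

\[
\max_{w,\;\rho,\;\xi}\;\; \rho \;-\; \frac{1}{n'} \sum_{i=1}^{n} \xi_i
\qquad \text{s.t.}\qquad
y_i \sum_{j} w_j\, h_j(x_i) \;\ge\; \rho - \xi_i,
\quad \xi_i \ge 0 \;\; (i = 1, \dots, n),
\quad w \ge 0,\;\; \sum_j w_j = 1 .
\]

For a fixed w, optimizing over ρ and ξ makes the objective equal to the average of the n′ smallest margins y_i Σ_j w_j h_j(x_i), which is the equivalence the excerpt attributes to [16]; in particular, n′ = 1 recovers the hard-margin (minimum-margin) LP.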
“…SSLPBoost actually maximizes a combination of the average of the n′ smallest margins over the labeled data and the average of the m′ smallest margins over the unlabeled data. Shalev-Shwartz [16] showed the use of soft margins in supervised learning. Here, we only consider the margins for the unlabeled data.…”
Section: B. Semi-Supervised Linear Programming Boosting (mentioning)
confidence: 99%