2007
DOI: 10.1090/conm/443/08559
Precise statements of convergence for AdaBoost and arc-gv

Abstract: We wish to dedicate this paper to Leo Breiman. We present two main results, the first concerning Freund and Schapire's AdaBoost algorithm, and the second concerning Breiman's arc-gv algorithm. Our discussion of AdaBoost revolves around a circumstance called the case of "bounded edges", in which AdaBoost's convergence properties can be completely understood. Specifically, our first main result is that if AdaBoost's "edge" values fall into a small interval, a corresponding interval can be found for the …
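The "edge" in the abstract is the standard AdaBoost quantity: the weighted correlation between the weak hypothesis's predictions and the labels under the current example distribution. As a minimal sketch (the function names and the array-based interface are illustrative, not taken from the paper), one AdaBoost round computes the edge and the usual step size, and the "bounded edges" case simply means every observed edge lands in a small interval [rho - delta, rho + delta]:

```python
import numpy as np

def adaboost_round(d, y, h_preds):
    """One AdaBoost round, given the current example weights d,
    labels y in {-1, +1}, and the chosen weak hypothesis's
    predictions h_preds in {-1, +1} on the training set."""
    # Edge of the weak hypothesis under the current distribution:
    # gamma = sum_i d_i * y_i * h(x_i)
    gamma = float(np.dot(d, y * h_preds))
    # Standard AdaBoost step size.
    alpha = 0.5 * np.log((1.0 + gamma) / (1.0 - gamma))
    # Multiplicative weight update, then renormalize.
    d_new = d * np.exp(-alpha * y * h_preds)
    d_new /= d_new.sum()
    return gamma, alpha, d_new

def edges_are_bounded(edges, rho, delta):
    """The 'bounded edges' condition: every observed edge lies in
    [rho - delta, rho + delta] for a small delta."""
    return all(rho - delta <= g <= rho + delta for g in edges)
```

Recording the sequence of `gamma` values returned by `adaboost_round` and checking them with `edges_are_bounded` is exactly the circumstance under which, per the abstract, the paper pins down AdaBoost's convergence behavior.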

Cited by 8 publications (4 citation statements)
References 12 publications
“…Sections 8, 9 and 10 contain proofs from Sections 3, 5, 6 and 7. Preliminary and less detailed statements of these results appear in [25,26].…”
Section: Convergence Properties of New and Old Algorithms
confidence: 94%
“…The adversarial multi-armed bandit problem can be treated within the class of Exponentially Weighted Average Forecaster algorithms [17]. Typically these algorithms maintain a probability distribution over the arms and draw a random arm from this distribution at each step.…”
Section: Exp4: Combination of Experts
confidence: 99%
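As a hedged illustration of the mechanism this excerpt describes (exponential weights over arms, with a random arm drawn from the maintained distribution at each step), here is a minimal Exp3-style sketch; the learning rate, the importance weighting, and the class interface are assumptions, not details from the cited work:

```python
import numpy as np

class ExponentialWeightsBandit:
    """Minimal exponentially weighted forecaster over n_arms arms
    (an Exp3-style sketch, not the cited paper's exact algorithm)."""

    def __init__(self, n_arms, eta=0.1, rng=None):
        self.eta = eta
        self.weights = np.ones(n_arms)
        self.rng = rng or np.random.default_rng()

    def probabilities(self):
        return self.weights / self.weights.sum()

    def draw(self):
        # Draw a random arm from the maintained distribution.
        p = self.probabilities()
        return self.rng.choice(len(p), p=p), p

    def update(self, arm, reward, p):
        # Importance-weighted reward estimate for the played arm,
        # then a multiplicative (exponential) weight update.
        estimate = reward / p[arm]
        self.weights[arm] *= np.exp(self.eta * estimate)
```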
“…We first specify a reward function for each information source. Let $\gamma_t = \sum_i d_t(i)\, y_i h_t(x_i) = \mathbb{E}_{i \sim d_t}[y_i h_t(x_i)]$ be the edge [17] of the base hypothesis $h_t$ chosen by the base learner at time step $t$. Here the edge helps define reward functions in the proposed algorithm.…”
Section: EBoost: Combining AdaBoost and Exp4
confidence: 99%
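Under the definition reconstructed above, the edge is just the $d_t$-weighted correlation between the labels and the base hypothesis's predictions. A short sketch of how it could feed a bandit-style reward follows; the rescaling from [-1, 1] to [0, 1] is a hypothetical choice for illustration, not something stated in the excerpt:

```python
import numpy as np

def edge(d, y, h_preds):
    # gamma_t = sum_i d_t(i) * y_i * h_t(x_i) = E_{i ~ d_t}[y_i h_t(x_i)]
    return float(np.dot(d, y * h_preds))

def edge_reward(d, y, h_preds):
    # Hypothetical reward shaping: map the edge from [-1, 1] to [0, 1]
    # so it can be consumed by an Exp4-style bandit update.
    return 0.5 * (edge(d, y, h_preds) + 1.0)
```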
“…But it still has its downsides, as Breiman's quote above indicates. There is evidence both for and against the power of the margin theory to predict the quality of the generalization performance [Breiman, 1999, Rudin et al., 2004, 2007a,b, Reyzin and Schapire, 2006]. But the most striking problem is that the margin bound is very loose: it does not explain the precise behavior of the error.…”
Section: The Margin Theory
confidence: 99%
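For reference, the quantity the margin theory bounds is the normalized margin of each training example under the combined classifier. A minimal sketch (variable names illustrative) of the computation:

```python
import numpy as np

def normalized_margins(alphas, H, y):
    """Normalized margins y_i * sum_t alpha_t h_t(x_i) / sum_t alpha_t.
    H is a (T, n) array of weak-hypothesis predictions in {-1, +1},
    alphas the T nonnegative combination weights, y the n labels."""
    f = np.asarray(alphas) @ np.asarray(H)   # combined scores, shape (n,)
    return y * f / np.sum(alphas)

# Margin bounds control test error via the distribution of these values,
# e.g. the minimum margin min(normalized_margins(alphas, H, y)); the
# excerpt's point is that such bounds are too loose to predict the
# error's precise behavior.
```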