2003
DOI: 10.1007/3-540-36434-x_4
An Introduction to Boosting and Leveraging

Abstract: We provide an introduction to theoretical and practical aspects of Boosting and Ensemble learning, providing a useful reference for researchers in the field of Boosting as well as for those seeking to enter this fascinating area of research. We begin with a short background concerning the necessary learning theoretical foundations of weak learners and their linear combinations. We then point out the useful connection between Boosting and the Theory of Optimization, which facilitates the understanding…
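The abstract describes boosting as forming linear combinations of weak learners. A minimal sketch of AdaBoost in Python may make this concrete; the choice of decision stumps as the weak learner and all function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def stump_predict(stump, X):
    """Predict labels in {-1, +1} with a single-feature threshold stump."""
    feat, thresh, sign = stump
    return np.where(X[:, feat] <= thresh, sign, -sign)

def best_stump(X, y, w):
    """Exhaustive search for the stump with the lowest weighted error."""
    best, best_err = None, np.inf
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for sign in (+1, -1):
                pred = np.where(X[:, feat] <= thresh, sign, -sign)
                err = w[pred != y].sum()
                if err < best_err:
                    best, best_err = (feat, thresh, sign), err
    return best

def adaboost(X, y, n_rounds=10):
    """AdaBoost: reweight the examples each round and combine the
    resulting weak learners into a weighted linear combination."""
    n = len(y)
    w = np.full(n, 1.0 / n)            # start with uniform example weights
    ensemble = []
    for _ in range(n_rounds):
        stump = best_stump(X, y, w)    # weak learner on current weights
        pred = stump_predict(stump, X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # coefficient in the combination
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of all weak learners."""
    F = sum(alpha * stump_predict(stump, X) for alpha, stump in ensemble)
    return np.sign(F)
```

The exponential reweighting step is what connects boosting to the optimization view mentioned in the abstract: each round performs a coordinate-wise step on an exponential loss.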

Cited by 281 publications (201 citation statements)
References 148 publications
“…For an overview see for example [FS99,Sch03,MR03]. The first boosting algorithm was used for showing the equivalence between weak learnability and strong learnability [Sch90].…”
Section: Introductionmentioning
confidence: 99%
“…One of the first model assembly systems was bagging, proposed by Breiman (1996) and Buhlmann and Yu (2002) and implemented in R by Spanish researchers, Alfaro et al (2013). Among the resampling techniques, we can find boosting, in particular the algorithm AdaBoost M1, introduced by Freund and Schapire (1997) and extensively assessed in many studies and analyses, most notably by Eibl and Pfeiffer (2002) and Meir and Rätsch (2003). Random forests is another resampling method, developed by Breiman (2001) as a variant of the bagging methodology using decision trees.…”
Section: Methodologies To Improve the Resultsmentioning
confidence: 99%
“…Breiman's arc-gv is quite similar to AdaBoost (in fact the pseudocodes differ by only one line), though AdaBoost has been found to exhibit interesting dynamical behavior that may sometimes resemble chaos, or may sometimes converge to provably stable cycles [13] (one can now imagine why AdaBoost is difficult to analyze) whereas arc-gv converges very nicely. See [17] or [7] for an introduction to boosting.…”
Section: Introductionmentioning
confidence: 99%