Ensemble Machine Learning 2012
DOI: 10.1007/978-1-4419-9326-7_2
Boosting Algorithms: A Review of Methods, Theory, and Applications

Cited by 173 publications (98 citation statements)
References 95 publications
“…This list of AdaBoost modifications is not exhaustive, but it represents the most common algorithms used to date. For a more in-depth treatment of boosting variants, the interested reader is referred to Ferreira and Figueiredo (2012). Long and Servedio (2010) proved that for any booster based on a convex potential loss function and any nonzero random classification noise rate, there is a data set that can be efficiently learned by the booster when no noise is present, but cannot be learned to accuracy better than 1/2 under random classification noise.…”
Section: Boosting and Classification Noise (mentioning)
confidence: 99%
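The Long and Servedio (2010) result concerns a worst-case construction, but the sensitivity of convex-potential boosters to random label noise is easy to observe empirically. The sketch below is not their construction; it simply trains scikit-learn's AdaBoostClassifier on an arbitrary synthetic dataset with training labels flipped at a given rate and reports accuracy on clean test labels. The dataset, noise rates, and estimator count are illustrative assumptions.

```python
# Minimal sketch: AdaBoost accuracy under random classification noise.
# Not the Long-Servedio construction; dataset and parameters are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def fit_and_score(noise_rate):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise_rate   # random classification noise
    y_noisy[flip] = 1 - y_noisy[flip]
    clf = AdaBoostClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_noisy)
    return clf.score(X_te, y_te)                   # evaluate on clean test labels

for eta in (0.0, 0.1, 0.2, 0.3):
    print(f"noise rate {eta:.1f}: test accuracy {fit_and_score(eta):.3f}")
```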
“…As many distinct boosting methods can be included within the meta-learner library (e.g., XGBoost), combining the power of multi-classifier boosting within a single base learner into the larger CBDA ensemble prediction enhances the method's power by aggregating across multiple base learners. Many studies examine the asymptotic convergence of bagging, boosting and ensemble methods [31][32][33][34]. Similar approaches may be employed to validate CBDA inference in terms of upper error bounds, convergence, and reliability.…”
Section: CBDA Two-Phase Bootstrapping Strategy (mentioning)
confidence: 99%
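As a rough illustration of the idea described above (boosted base learners combined into a larger ensemble built by bootstrapping cases and subsampling features), the following sketch uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost and aggregates base-learner votes. It is not the authors' CBDA implementation; the number of base learners, feature-subset size, and voting rule are illustrative assumptions.

```python
# Schematic sketch only: boosting ensembles as base learners inside a larger
# bootstrap-and-vote ensemble. GradientBoostingClassifier stands in for XGBoost.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

models = []
for _ in range(25):                                        # number of base learners (arbitrary)
    rows = rng.choice(len(X), size=len(X), replace=True)   # bootstrap the cases
    cols = rng.choice(X.shape[1], size=10, replace=False)  # random feature subset
    m = GradientBoostingClassifier(n_estimators=100, random_state=0)
    m.fit(X[rows][:, cols], y[rows])
    models.append((m, cols))

# Aggregate the boosted base learners by majority vote.
votes = np.stack([m.predict(X[:, cols]) for m, cols in models])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("training accuracy of the aggregated ensemble:", (ensemble_pred == y).mean())
```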
“…In this work, the classification is performed by gentle AdaBoost [66], which has been reported [67] to perform better than the original discrete [68] and real AdaBoost [69], owing to its less severe weighting of mislabeled or unrepresentative training samples.…”
Section: Pre-classification (mentioning)
confidence: 99%
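For context on why gentle AdaBoost reweights less severely: each round fits a regression stump to labels in {-1, +1} by weighted least squares, so the per-round scores are bounded in [-1, 1], whereas real AdaBoost's half-log-odds scores can grow arbitrarily large on hard or mislabeled points. The sketch below is a minimal gentle-AdaBoost loop in the spirit of Friedman, Hastie, and Tibshirani's formulation; the dataset and number of rounds are illustrative assumptions.

```python
# Minimal sketch of the gentle AdaBoost update, shown only to illustrate its
# bounded, mild sample re-weighting; dataset and round count are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeRegressor

X, y01 = make_classification(n_samples=500, n_features=10, random_state=0)
y = 2 * y01 - 1                                  # labels in {-1, +1}

n_rounds = 50
w = np.full(len(y), 1.0 / len(y))                # sample weights
F = np.zeros(len(y))                             # additive model

for _ in range(n_rounds):
    stump = DecisionTreeRegressor(max_depth=1)
    stump.fit(X, y, sample_weight=w)             # weighted least-squares fit
    f = stump.predict(X)                         # per-round scores bounded in [-1, 1]
    F += f
    w *= np.exp(-y * f)                          # mild, bounded re-weighting
    w /= w.sum()

print("training accuracy:", (np.sign(F) == y).mean())
```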