2015
DOI: 10.1587/transinf.2015edp7069

Penalized AdaBoost: Improving the Generalization Error of Gentle AdaBoost through a Margin Distribution

Abstract: Gentle AdaBoost is widely used in object detection and pattern recognition due to its efficiency and stability. To focus on instances with small margins, Gentle AdaBoost assigns larger weights to these instances during training. However, misclassification of small-margin instances can still occur, which causes the weights of these instances to grow larger and larger. Eventually, several large-weight instances might dominate the whole data distribution, encouraging Gentle AdaBoost to choose wea…
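The weight dynamics the abstract describes can be seen in a minimal sketch of the standard Gentle AdaBoost training loop. This is ordinary Gentle AdaBoost, not the paper's penalized variant; the stump learner, round count, and use of scikit-learn are illustrative assumptions.

```python
# Minimal sketch of the Gentle AdaBoost weight update (not the paper's code).
# Shows how instances with small or negative margins accumulate ever-larger weights.
# Assumes binary labels y in {-1, +1}; max_depth and n_rounds are arbitrary choices.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gentle_adaboost(X, y, n_rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)          # uniform initial weight distribution
    F = np.zeros(n)                  # additive model scores on the training set
    stumps = []
    for _ in range(n_rounds):
        # Weighted regression stump f_t approximating E[y | x] under the current weights
        stump = DecisionTreeRegressor(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        f = stump.predict(X)
        F += f
        # Instances with small margin y_i * f_t(x_i) get larger weights; repeatedly
        # misclassified instances can come to dominate the whole distribution.
        w *= np.exp(-y * f)
        w /= w.sum()
        stumps.append(stump)
    margins = y * F                  # final (unnormalized) training margins
    return stumps, margins
```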

Cited by 8 publications (4 citation statements) | References 23 publications
“…A logical continuation of this work is quantizing other variants of AdaBoost which depend on domain partitioning hypotheses such as GentleBoost [5], ModestBoost [41], Parameterized AdaBoost [42], and Penalized AdaBoost [43]. Each variant has different generalization abilities, which make them useful in different contexts.…”
Section: Discussion (mentioning)
confidence: 99%
“…Moreover, it improves the thresholding of Margin-pruning Boost in Step (2)(e). The parameter in (18) is assigned to [15]. Then we will analyze the generalization abilities of the six variants in the next section.…”
Section: Penalized AdaBoost (mentioning)
confidence: 99%
“…However, its performance is unstable because the accuracy drops occasionally. For the same purpose, Wu and Nagahashi devised Margin-pruning Boost [14] and Penalized AdaBoost [15]. Margin-pruning Boost applies a weight reinitialization approach to reduce the influence from noise-like data, while Penalized AdaBoost improves Margin-pruning Boost by introducing an adaptive weight resetting policy.…”
Section: Introduction (mentioning)
confidence: 99%
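The two weight-handling ideas contrasted in the statement above can be sketched side by side. This is a hypothetical illustration only: the exact reinitialization and resetting rules are not given on this page, so the threshold and reset values below are assumptions, not the published policies of either method.

```python
# Purely illustrative contrast of weight reinitialization vs. adaptive weight resetting,
# layered on a boosting-style weight vector w (normalized to sum to 1 over n instances).
import numpy as np

def reinitialize_weights(w):
    """Margin-pruning-Boost-style idea: reset the whole distribution back to uniform."""
    return np.full_like(w, 1.0 / len(w))

def adaptive_reset(w, margins, threshold=0.0):
    """Penalized-AdaBoost-style idea (sketched): reset only instances whose margins
    suggest they are noise-like, instead of wiping out all accumulated weights."""
    w = w.copy()
    w[margins < threshold] = 1.0 / len(w)   # hypothetical per-instance reset rule
    return w / w.sum()
```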
“…The word "additive" doesn't indicate a model fit which is additive in the covariates, but points to the fact that boosting is an additive (in fact, a linear) combination of "simple" (function) estimators. Also, Ratliff et al [9] and Wu et al [10] established related ideas that were mostly acknowledged in the machine learning community. In [11], further views on boosting are given; specifically, the authors first pointed out the relation between boosting and L 1 -penalized estimation.…”
Section: Introduction (mentioning)
confidence: 99%