Proceedings of the 2008 ACM Symposium on Applied Computing
DOI: 10.1145/1363686.1364109

Empirical evaluation of a new structure for AdaBoost

Abstract: We propose a mixed structure to form cascades of AdaBoost classifiers, in which several strong classifiers are trained in parallel for each layer. The structure allows rapid training and guarantees high hit rates without changing the original thresholds. We implemented and tested the approach on two datasets from the UCI repository [1], comparing binary classifiers under three structures: standard AdaBoost, a cascade classifier with threshold adjustments, and the proposed structure.
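The layered decision rule suggested by the abstract can be sketched as follows. This is a minimal reading, not the authors' implementation: the function name and the OR-within-layer / AND-across-layers rule are our assumptions about how parallel strong classifiers keep the hit rate high without lowering individual thresholds.

```python
# Sketch of a cascade with parallel strong classifiers per layer
# (hypothetical structure; not the paper's actual code).

def cascade_predict(layers, x):
    """layers: list of layers; each layer is a list of strong classifiers
    (callables returning True for the positive class).

    An input is accepted by a layer if ANY of its parallel classifiers
    accepts it (assumed here to be what preserves the hit rate without
    threshold adjustment), and must pass EVERY layer to be classified
    positive, as in a standard cascade."""
    for layer in layers:
        if not any(clf(x) for clf in layer):
            return False  # rejected early by this layer
    return True

# Toy demo: threshold "classifiers" on a scalar feature.
layer1 = [lambda x: x > 0.2, lambda x: x > 0.5]  # two parallel classifiers
layer2 = [lambda x: x > 0.4]
layers = [layer1, layer2]
```

In this toy setup, an input of 0.6 passes both layers, while 0.3 passes layer 1 (via the looser classifier) but is rejected by layer 2.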

Cited by 5 publications (5 citation statements); references 4 publications.
“…The algorithm is formulated on the ideas of boosted cascades in the form of Barczak et al [27]. The method constructs clusters of ensembles within each layer that function as nested cascades.…”
Section: Our Contribution
confidence: 99%
“…Barczak et al [27] initially proposed a classifier training structure termed Parallel Strong Classifier within the same Layer (PSL), seen in Fig. 1.…”
Section: Building the Static Cascaded Ensemble
confidence: 99%
“…The PSL (Parallel Strong classifier within the same Layer) training framework introduced by Barczak et al [11] originally sought to address the convergence bottleneck during the training of cascade layers. However, the modularity of the approach also simplified cascade optimization.…”
Section: Introduction
confidence: 99%