2010
DOI: 10.1007/978-3-642-14980-1_63
A Modular Approach to Training Cascades of Boosted Ensembles

Abstract: Building on the ideas of Viola-Jones [1], we present a framework for training cascades of boosted ensembles (CoBE) which introduces further modularity and tractability into the training process. It addresses the challenges faced by CoBE frameworks, such as protracted runtimes, slow layer convergence and classifier optimization. The framework possesses the ability to bootstrap positive samples and may in turn be extended into the domain of incremental learning. This paper aims to address our framework's susceptibi…
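For orientation, here is a minimal sketch of the cascade-of-boosted-ensembles idea the abstract refers to: each stage is a boosted classifier that can reject a sample early, so only samples accepted by every stage are labeled positive. The Cascade class, thresholds and stage counts below are illustrative assumptions, not the paper's implementation; scikit-learn's AdaBoostClassifier stands in for the per-stage learner.

```python
# Minimal sketch of a cascade of boosted ensembles (CoBE), assuming
# scikit-learn's AdaBoostClassifier as the per-stage learner. Thresholds
# and stage counts are illustrative, not the paper's configuration.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

class Cascade:
    """Chain of boosted stages with early rejection of negatives."""

    def __init__(self):
        self.stages = []  # list of (classifier, acceptance threshold)

    def add_stage(self, X, y, threshold=0.5):
        """Train one boosted stage, then pass through only the samples
        it accepts, so later stages see progressively harder data."""
        clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
        self.stages.append((clf, threshold))
        accepted = clf.predict_proba(X)[:, 1] >= threshold
        return X[accepted], y[accepted]

    def predict(self, X):
        """A sample is positive only if every stage accepts it."""
        keep = np.ones(len(X), dtype=bool)
        for clf, thr in self.stages:
            if not keep.any():
                break  # everything already rejected
            idx = np.flatnonzero(keep)
            scores = clf.predict_proba(X[idx])[:, 1]
            keep[idx[scores < thr]] = False  # early rejection
        return keep.astype(int)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

cascade = Cascade()
Xs, ys = X, y
for _ in range(3):
    if len(np.unique(ys)) < 2:
        break  # no negatives left to train the next stage against
    Xs, ys = cascade.add_stage(Xs, ys)
```

With scikit-learn's default depth-1 decision trees as weak learners, each stage is a boosted ensemble of stumps, echoing the Viola-Jones setup in which cheap early stages reject most negatives.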

Cited by 2 publications (2 citation statements); references 10 publications. The citing publications appeared in 2010–2011, and both citation statements are of the "mentioning" type (0 supporting, 0 contrasting).

Citation statements (ordered by relevance):
“…The comparisons were conducted between our algorithm and the same underlying classifier that was trained on stationary data without concept learning capabilities. The classifier in question was first trained offline using the [29] method with Haar-like features. Five thousand positive faces from a combination of the FERET and Yale Face Database B [31] data sets were used, against a …”
Section: Experiments Design
Confidence: 99%
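The statement above mentions offline training with Haar-like features. As a minimal illustration of how such features are typically evaluated, the sketch below computes a two-rectangle Haar-like feature via an integral image; the function names and the left-minus-right polarity are assumptions for illustration, not details from the cited papers.

```python
# Illustrative evaluation of a two-rectangle Haar-like feature using an
# integral image, as in Viola-Jones style training.
import numpy as np

def integral_image(img):
    # Cumulative sums along both axes make any rectangle sum O(1).
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_haar(ii, r, c, h, w):
    """Left minus right rectangle at (r, c), each rectangle h x w."""
    left = rect_sum(ii, r, c, r + h, c + w)
    right = rect_sum(ii, r, c + w, r + h, c + 2 * w)
    return left - right

patch = np.arange(24 * 24, dtype=float).reshape(24, 24)
print(two_rect_haar(integral_image(patch), 0, 0, 24, 12))
```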
“…The idea of a dual-cascaded structure was taken further in Susnjak et al. [29] in order to implement positive sample bootstrapping. This enabled the utilization of potentially massive positive data sets without the learning algorithm being exposed to all samples explicitly, a task that would otherwise be computationally too expensive.…”
Section: Building the Static Cascaded Ensemble
Confidence: 99%
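The positive-sample bootstrapping described in this statement can be sketched roughly as follows, reusing the hypothetical Cascade class from the earlier sketch: positives are drawn lazily from a large pool, and only those the current cascade still accepts are added to the next stage's training set. The chunked sampling policy and the names (bootstrap_positives, positive_pool, batch_size) are assumptions for illustration, not the authors' exact dual-cascade procedure.

```python
# Rough sketch of positive-sample bootstrapping between stages, reusing
# the hypothetical Cascade class above. The chunked sampling policy is
# an assumption, not the authors' exact dual-cascade procedure.
import numpy as np

def bootstrap_positives(cascade, positive_pool, batch_size):
    """Scan a large pool of positives in chunks and collect only those
    the current cascade still accepts, stopping once batch_size are
    found, so the learner never touches the whole pool at once."""
    collected, total = [], 0
    for start in range(0, len(positive_pool), batch_size):
        chunk = positive_pool[start:start + batch_size]
        accepted = chunk[cascade.predict(chunk).astype(bool)]
        collected.append(accepted)
        total += len(accepted)
        if total >= batch_size:
            break
    if not collected:
        return positive_pool[:0]
    return np.concatenate(collected)[:batch_size]
```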