2008
DOI: 10.1016/j.patcog.2007.10.015

From dynamic classifier selection to dynamic ensemble selection

Abstract: In handwritten pattern recognition, the multiple classifier system has been shown to be useful for improving recognition rates. One of the most important tasks in optimizing a multiple classifier system is to select a group of adequate classifiers, known as an Ensemble of Classifiers (EoC), from a pool of classifiers. Static selection schemes select an EoC for all test patterns, and dynamic selection schemes select different classifiers for different test patterns. Nevertheless, it has been shown that traditio…

Cited by 368 publications (237 citation statements)
References 21 publications
“…For the calculation of the competence, various performance estimates are used, such as local accuracy estimation (Didaci et al, 2005), the Bayes confidence measure (Huenupán et al, 2008), multiple classifier behaviour (Giacinto and Roli, 2001), the oracle-based measure (Ko et al, 2008), methods based on relating the classifier's response to the response obtained by random guessing (Woloszynski et al, 2012) or the randomized classification model (Woloszynski and Kurzynski, 2011), among others.…”
Section: Introduction
confidence: 99%
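The simplest competence measure named in the statement above, local accuracy estimation, can be illustrated with a short sketch. This is a minimal, hypothetical illustration only; identifiers such as local_accuracy_selection, X_val, y_val and k are not from the cited papers. Each classifier is scored by its accuracy on the k validation neighbours of the query, and the most locally accurate one labels the query.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_accuracy_selection(classifiers, X_val, y_val, x_query, k=7):
    """Sketch of local accuracy estimation (dynamic classifier selection):
    score each classifier by its accuracy on the k validation neighbours
    of the query and let the most locally accurate one predict."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    idx = nn.kneighbors(x_query.reshape(1, -1), return_distance=False)[0]
    # competence = local accuracy in the region defined by the k neighbours
    competences = [np.mean(clf.predict(X_val[idx]) == y_val[idx]) for clf in classifiers]
    best = classifiers[int(np.argmax(competences))]
    return best.predict(x_query.reshape(1, -1))[0]
```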
“…There are three main ways that classifiers can be selected in order to optimise the overall ensemble accuracy [5].…”
Section: A. Ensemble Selection Strategies
confidence: 99%
“…2) Local k-NN Ensemble Selection: Ko et al. [5] suggest two methods for DES. KNORA-ELIMINATE considers the nearest K neighbours to x and uses the classifiers that give a correct prediction for all K neighbouring points.…”
Section: Finding the Optimal Ensemble
confidence: 99%
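A minimal sketch of the KNORA-ELIMINATE rule quoted above, under the method's usual assumptions (a validation set defines the region of competence, and the neighbourhood is shrunk when no classifier is correct on all K neighbours). Identifiers such as knora_eliminate, X_val and k are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knora_eliminate(classifiers, X_val, y_val, x_query, k=7):
    """Keep only the classifiers that correctly label every one of the k
    validation neighbours of x_query; if none qualify, shrink the
    neighbourhood until at least one does, then majority-vote."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    idx = nn.kneighbors(x_query.reshape(1, -1), return_distance=False)[0]
    for kk in range(k, 0, -1):             # progressively reduce the region of competence
        region = idx[:kk]
        ensemble = [clf for clf in classifiers
                    if np.all(clf.predict(X_val[region]) == y_val[region])]
        if ensemble:                        # at least one locally "oracle" classifier found
            break
    else:
        ensemble = list(classifiers)        # fallback: use the whole pool
    # majority vote among the selected classifiers
    votes = np.array([clf.predict(x_query.reshape(1, -1))[0] for clf in ensemble])
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```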
“…false if b = 0) simultaneously. Different strategies have been proposed for selecting the classifiers to include in the ensemble (Kim and Oh 2008, Ko et al. 2008, Tsoumakas et al. 2009, Yang 2011, Guo and Boukir 2013, Soto et al. 2013). The strategy used here is an evolution of the "Selection by Accuracy and Diversity (SAD)" algorithm proposed by Yang (2011). If all the classifiers have been introduced into the ensemble, compare all the classifier ensembles built and select the best one; otherwise, repeat steps 4 to 6.…”
Section: Classifier Ensemble
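The partially quoted SAD-style strategy above builds the ensemble incrementally and finally keeps the best intermediate ensemble. A hypothetical greedy sketch in that spirit, selecting by validation accuracy only (the actual SAD algorithm also weights diversity), is given below; all names are illustrative and the labels are assumed to be non-negative integers.

```python
import numpy as np

def sad_style_selection(classifiers, X_val, y_val):
    """Greedy sketch: add one classifier at a time (the one that most
    improves majority-vote accuracy on a validation set), remember every
    intermediate ensemble, and return the best-scoring one at the end."""
    def vote_accuracy(ensemble):
        # stacked predictions: one row per classifier, one column per sample
        preds = np.array([clf.predict(X_val) for clf in ensemble]).astype(int)
        maj = np.array([np.bincount(col).argmax() for col in preds.T])
        return float(np.mean(maj == y_val))

    remaining, ensemble, history = list(classifiers), [], []
    while remaining:
        scores = [vote_accuracy(ensemble + [clf]) for clf in remaining]
        best = int(np.argmax(scores))
        ensemble.append(remaining.pop(best))
        history.append((list(ensemble), scores[best]))
    # once every classifier has been introduced, compare all ensembles built
    best_ensemble, _ = max(history, key=lambda t: t[1])
    return best_ensemble
```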