2005
DOI: 10.1016/j.patrec.2005.03.028

Cost-conscious classifier ensembles

Cited by 21 publications (23 citation statements)
References 9 publications
“…Another possibility is to also include the other types of cost (e.g., delay cost (Sheng and Ling, 2006) and computational cost (Demir and Alpaydin, 2005)) into the problem formulation.…”
Section: Results (mentioning)
confidence: 99%
“…The random subspace method [16] trains different experts with different subsets of a given feature set. Different representations of the same input make different characteristics explicit and therefore accuracy may be improved by combination [1,11]. It has also been proposed to generate an ensemble of fuzzy decision trees and take their combination for better accuracy [45].…”
Section: Introduction (mentioning)
confidence: 99%
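The random subspace method mentioned in the statement above can be illustrated with a short sketch: each expert is trained on a different random subset of the features and the experts' predictions are combined by majority vote. This is an illustrative reconstruction, not code from the cited papers; the dataset, base classifier, ensemble size, and subspace dimension are arbitrary placeholders.

```python
# Hypothetical sketch of the random subspace method: each expert sees a
# different random feature subset; predictions are combined by majority vote.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_experts, subspace_dim = 11, X.shape[1] // 2
experts = []
for _ in range(n_experts):
    feats = rng.choice(X.shape[1], size=subspace_dim, replace=False)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:, feats], y_tr)
    experts.append((feats, clf))

# Majority vote over the experts' class predictions
votes = np.stack([clf.predict(X_te[:, feats]) for feats, clf in experts])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("subspace ensemble accuracy:", (y_pred == y_te).mean())
```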
“…That is, we are interested in both pruning the inaccurate ones and also to keep a check on complexity, we want to prune the redundant. ''Diversity'' measures have been proposed [23,22] and one possibility is to have an incremental, forward search where we add a classifier to an ensemble if it is diverse or adds to accuracy [9,11,35,49,42], or another possibility is to have a decremental, backward search where a classifier is removed or pruned if it is not diverse enough or if its removal does not increase error [30,27]. In this work, we propose an alternative method which combines base classifiers using principal component analysis (PCA) to get uncorrelated eigenclassifiers.…”
Section: Introduction (mentioning)
confidence: 99%
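The incremental, forward search described in that statement can be sketched generically: a base classifier is added to the ensemble only if it improves the validation accuracy of the combined vote. This is a hedged illustration of the general pruning idea, not the PCA-based eigenclassifier method the citing paper proposes; `forward_prune` and its inputs are hypothetical names.

```python
# Sketch of forward (incremental) ensemble pruning by validation accuracy.
import numpy as np

def forward_prune(pred_val, y_val):
    """pred_val: (n_classifiers, n_samples) 0/1 validation predictions;
    y_val: true labels. Greedily adds the classifier that most improves
    the majority vote, stopping when no addition helps."""
    chosen, best_acc, improved = [], 0.0, True
    while improved:
        improved = False
        for i in range(pred_val.shape[0]):
            if i in chosen:
                continue
            vote = (pred_val[chosen + [i]].mean(axis=0) >= 0.5).astype(int)
            acc = (vote == y_val).mean()
            if acc > best_acc:
                best_acc, best_i, improved = acc, i, True
        if improved:
            chosen.append(best_i)
    return chosen, best_acc
```

Here `pred_val` would hold each base classifier's predictions on a held-out validation set; the backward (decremental) variant mentioned in the quotation would instead start from the full ensemble and remove classifiers whose removal does not increase error.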
“…Subsequently, we train the estimators to learn these posteriors by using only the previously extracted features. Note that similar posterior probability estimations have been achieved by using linear perceptrons [4] and dynamic Bayesian networks [11].…”
Section: Methods (mentioning)
confidence: 82%
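A minimal sketch of the posterior estimation described above, assuming a setup in which a simple linear estimator is trained to reproduce another classifier's class posteriors from the extracted features. The teacher model, dataset, and estimator below are placeholders, not the components used in the citing paper or in the linear-perceptron approach it cites.

```python
# Illustrative sketch: a linear estimator learns to predict a classifier's
# posterior P(class=1 | x) directly from the feature vector.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

X, y = load_breast_cancer(return_X_y=True)
teacher = RandomForestClassifier(random_state=0).fit(X, y)
posteriors = teacher.predict_proba(X)[:, 1]          # target posteriors

estimator = LinearRegression().fit(X, posteriors)    # linear posterior estimator
approx = np.clip(estimator.predict(X), 0.0, 1.0)
print("mean absolute posterior error:", np.abs(approx - posteriors).mean())
```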
“…Compared to the misclassification cost, the other types are much less studied. The cost of computation includes both static complexity, which arises from the size of a computer program [3], and dynamic complexity, which is incurred during training and testing a classifier [4]. The cost of feature extraction arises from the effort of acquiring a feature.…”
Section: Introduction (mentioning)
confidence: 99%
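The "dynamic complexity" mentioned in that statement, i.e. the computational cost incurred while training and testing a classifier, can be measured with simple wall-clock timing. The sketch below is only an illustration with arbitrary classifiers and data, not the cost model used in the cited works.

```python
# Sketch: measuring training-time and testing-time (dynamic) cost per classifier.
import time
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
for clf in (DecisionTreeClassifier(random_state=0), KNeighborsClassifier()):
    t0 = time.perf_counter()
    clf.fit(X, y)
    train_cost = time.perf_counter() - t0
    t0 = time.perf_counter()
    clf.predict(X)
    test_cost = time.perf_counter() - t0
    print(type(clf).__name__, f"train {train_cost:.4f}s, test {test_cost:.4f}s")
```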