2010
DOI: 10.1016/j.patcog.2009.06.013

Combining predictions in pairwise classification: An optimal adaptive voting strategy and its relation to weighted voting

Abstract: Weighted voting is the commonly used strategy for combining predictions in pairwise classification. Even though it shows good classification performance in practice, it is often criticized for lacking a sound theoretical justification. In this paper, we study the problem of combining predictions within a formal framework of label ranking and, under some model assumptions, derive a generalized voting strategy in which predictions are properly adapted according to the strengths of the corresponding base classifi…

Cited by 113 publications (51 citation statements)
References 22 publications
“…This first technique is one of the simplest and most widely used aggregation methods in pairwise learning [30]. The final class is assigned by taking the maximum row-wise vote over the values of the strict fuzzy preference relation P.…”
Section: Case 1: Voting Methods
confidence: 99%
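The row-wise voting this excerpt describes can be sketched in a few lines. The snippet below is a minimal illustration, assuming P is already given as a square matrix of pairwise preference degrees; the matrix values and the tie-breaking rule are illustrative assumptions, not the cited paper's exact formulation.

```python
import numpy as np

def vote_from_preference_relation(P):
    """Pick the class whose row of the strict preference relation P has the
    largest total vote.  P[i, j] is the (possibly fuzzy) degree to which the
    classifier trained on classes (i, j) prefers class i over class j."""
    P = np.asarray(P, dtype=float).copy()
    np.fill_diagonal(P, 0.0)            # a class is not compared with itself
    row_votes = P.sum(axis=1)           # "maximum vote by rows"
    return int(np.argmax(row_votes))    # ties broken by the lowest class index

# Toy example with 3 classes and pairwise preference degrees in [0, 1]
P = np.array([[0.0, 0.8, 0.6],
              [0.2, 0.0, 0.7],
              [0.4, 0.3, 0.0]])
print(vote_from_preference_relation(P))  # -> 0
```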
“…However, since every binary classifier always produces an output score for a query instance, the scores that are not directly related to the actual class of the instance may introduce noise into the decision process and therefore lead to an erroneous classification. This is known as the "non-competent classifier problem" [8], and addressing it can improve the final system. The main difficulty is that we cannot know a priori which classifiers are competent for a given instance, but we can restrict the score matrix to a small subset of classes to which membership is most probable.…”
Section: Dynamic Classifier Selection for One-vs-One Strategy
confidence: 99%
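A minimal sketch of the restriction step described above, assuming the subset of candidate classes has already been chosen by some upstream selection rule (which is the hard part and is not shown here); the function and variable names are hypothetical.

```python
import numpy as np

def dynamic_ovo_aggregation(score_matrix, candidate_classes):
    """Restrict the one-vs-one score matrix to a small subset of candidate
    classes before voting, so that binary classifiers trained without any of
    the candidate classes (the potentially non-competent ones) are ignored.

    score_matrix[i, j]  : confidence that the query belongs to class i
                          rather than class j.
    candidate_classes   : indices of the classes deemed most probable."""
    S = np.asarray(score_matrix, dtype=float)
    idx = np.asarray(candidate_classes)
    R = S[np.ix_(idx, idx)]              # sub-matrix over candidate classes only
    np.fill_diagonal(R, 0.0)
    winner_local = int(np.argmax(R.sum(axis=1)))
    return int(idx[winner_local])        # map back to the original class label

# Example: 4 classes, but only {0, 2, 3} are kept as candidates
S = np.random.default_rng(0).random((4, 4))
print(dynamic_ovo_aggregation(S, candidate_classes=[0, 2, 3]))
```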
“…In order to aggregate the outputs of all binary classifiers, the simplest and most widely used method in pairwise learning is "Weighted Voting" (WV) [8]: the scores of the binary classifiers associated with each class are summed, and the final class is the one with the maximum total.…”
Section: Introduction
confidence: 99%
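For contrast, a small sketch of weighted voting versus plain binary voting over the same score matrix, under the assumption that score_matrix[i, j] is the confidence that the query belongs to class i rather than class j; this illustrates the general WV scheme, not the specific formulation of [8].

```python
import numpy as np

def weighted_voting(score_matrix):
    """Weighted Voting (WV): every binary classifier contributes its
    real-valued score; the class with the largest summed score wins."""
    S = np.asarray(score_matrix, dtype=float).copy()
    np.fill_diagonal(S, 0.0)
    return int(np.argmax(S.sum(axis=1)))

def binary_voting(score_matrix):
    """Plain voting: each pairwise score is first turned into a hard 0/1 vote
    (class i beats class j if its score is the larger of the pair)."""
    S = np.asarray(score_matrix, dtype=float)
    votes = (S > S.T).astype(float)
    return int(np.argmax(votes.sum(axis=1)))

S = np.array([[0.0, 0.9, 0.6],
              [0.1, 0.0, 0.8],
              [0.4, 0.2, 0.0]])
print(weighted_voting(S), binary_voting(S))  # -> 0 0
```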
“…Although many classification problems in practice are multi-class, most embedded methods have been proposed for binary classification. Many algorithms for multi-class classification therefore decompose the multi-class problem into a set of binary classification problems (Clark and Boswell, 1991; Anand et al., 1995; Debnath et al., 2004) and combine the outputs of the binary classifiers to construct a multi-class classifier (Friedman, 1996; Hastie and Tibshirani, 1997; Hüllermeier and Vanderlooy, 2010). In the same way, variable selection for multi-class classification can be replaced by multiple variable selections on the binary classification problems.…”
Section: Introduction
confidence: 99%
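As a concrete illustration of the decompose-and-combine scheme mentioned above, the snippet below uses scikit-learn's one-vs-one wrapper; the base learner (logistic regression) and dataset (iris) are illustrative choices, and only the decomposition and combination step is shown, not the embedded variable selection the excerpt goes on to discuss.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One binary classifier per pair of classes: K*(K-1)/2 = 3 models for K = 3,
# whose predictions are combined by voting to yield the multi-class decision.
ovo = OneVsOneClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
print(ovo.score(X_te, y_te))
```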