2010
DOI: 10.1016/j.eswa.2009.11.040

A novel measure for evaluating classifiers

Cited by 57 publications (47 citation statements)
References 22 publications
“…The MCC measure was originally extended to the multi-class problem in [14]. Recently, and following a comparison between MCC and Confusion Entropy [40] reported in [17], MCC was recommended as an optimal tool for practical tasks, since it presents a good trade-off among discriminatory ability, consistency and coherent behavior with varying number of classes, unbalanced datasets and randomization.…”
Section: Performance Assessment Measures
confidence: 99%
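For reference, the multi-class generalization of MCC mentioned above can be computed directly from an N x N confusion matrix. The sketch below uses the covariance-style formulation commonly associated with that extension; the function name, the row/column orientation (rows = true classes, columns = predicted classes), and the zero-denominator convention are illustrative assumptions rather than notation taken from the cited papers.

import numpy as np

def multiclass_mcc(C):
    """Multi-class MCC from an N x N confusion matrix
    (rows = true classes, columns = predicted classes).
    Illustrative sketch of the covariance-style formulation."""
    C = np.asarray(C, dtype=float)
    t = C.sum(axis=1)   # samples truly belonging to each class (row sums)
    p = C.sum(axis=0)   # samples predicted as each class (column sums)
    c = np.trace(C)     # total correctly classified samples
    s = C.sum()         # total number of samples
    numerator = c * s - np.dot(t, p)
    denominator = np.sqrt(s**2 - np.dot(p, p)) * np.sqrt(s**2 - np.dot(t, t))
    return numerator / denominator if denominator != 0 else 0.0

# Example: a near-diagonal 3-class confusion matrix gives an MCC close to 1.
print(multiclass_mcc([[50, 3, 2], [4, 40, 6], [1, 5, 44]]))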
“…As one can find in [3], the measure of confusion entropy was shown to be more precise than accuracy, for it exploits the class distribution information of misclassifications of all classes. It was also shown to be more precise than RCI (NMI), for it takes into consideration the accuracy of classifiers, as well.…”
Section: Acc
confidence: 99%
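As a rough illustration of how confusion entropy exploits the class distribution of misclassifications, the sketch below follows the commonly cited CEN construction: misclassification probabilities "subject to" each class j, a logarithm of base 2(N-1), and per-class weights proportional to each class's share of samples. The names, the zero-probability convention, and other details are assumptions and should be checked against the definition in [3] before use.

import numpy as np

def confusion_entropy(C):
    """Confusion entropy (CEN) sketch for an N x N confusion matrix
    (rows = true classes, columns = predicted classes)."""
    C = np.asarray(C, dtype=float)
    N = C.shape[0]
    total = C.sum()
    cen = 0.0
    for j in range(N):
        # Normalizer "subject to class j": row j plus column j
        # (the diagonal entry C[j, j] is counted twice by construction).
        denom = C[j, :].sum() + C[:, j].sum()
        if denom == 0:
            continue
        weight = denom / (2.0 * total)   # share of samples involving class j
        cen_j = 0.0
        for k in range(N):
            if k == j:
                continue
            for prob in (C[j, k] / denom, C[k, j] / denom):
                if prob > 0:   # treat 0 * log(0) as 0
                    cen_j -= prob * np.log(prob) / np.log(2.0 * (N - 1))
        cen += weight * cen_j
    return cen   # 0 for a perfectly diagonal matrix; larger means more confusion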
“…It also encompasses the measures of the F-score, the sensitivity-PPA (Positive predictive accuracy) average and the sensitivity-PPA product [6], the AUC defined by one run (AUC_b), which is also called balanced accuracy, Youden's index [7,8], which has linear correspondence with AUC_b, the odds ratio or cross-product, the discriminant power [9], which can be computed directly from the odds ratio, the likelihood, Cohen's kappa [10,11], relative classifier information (RCI) [12,13], normalized mutual information (NMI) [6,14], Matthews Correlation Coefficient (MCC) [15], the mean F-measure [16], macro average arithmetic [17], macro average geometric [5], etc. Confusion entropy [3], CEN for short, also belongs to this group. All these measures can be computed based on a confusion matrix.…”
Section: Related Work
confidence: 99%
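Since every measure in the list above is a function of the confusion matrix alone, two small examples may make this concrete: a hedged sketch of Cohen's kappa for the multi-class case, and of balanced accuracy (AUC_b) together with Youden's index for the binary case. The assumed 2 x 2 layout [[TP, FN], [FP, TN]] and the function names are illustrative, not taken from the cited papers.

import numpy as np

def cohens_kappa(C):
    """Cohen's kappa from an N x N confusion matrix
    (rows = true classes, columns = predicted classes)."""
    C = np.asarray(C, dtype=float)
    s = C.sum()
    p_observed = np.trace(C) / s                               # observed agreement
    p_expected = np.dot(C.sum(axis=1), C.sum(axis=0)) / s**2   # chance agreement
    return (p_observed - p_expected) / (1.0 - p_expected)

def balanced_accuracy_and_youden(C):
    """Balanced accuracy (AUC_b) and Youden's index for a 2 x 2 matrix
    assumed to be laid out as [[TP, FN], [FP, TN]]."""
    (tp, fn), (fp, tn) = np.asarray(C, dtype=float)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auc_b = (sensitivity + specificity) / 2.0
    youden = sensitivity + specificity - 1.0   # equals 2 * auc_b - 1, the linear link noted above
    return auc_b, youden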