2012
DOI: 10.1371/journal.pone.0041882
A Comparison of MCC and CEN Error Measures in Multi-Class Prediction

Abstract: We show that the Confusion Entropy, a measure of performance in multiclass problems, has a strong (monotone) relation with the multiclass generalization of a classical metric, the Matthews Correlation Coefficient. Analytical results are provided for the limit cases of general no-information (n-face dice rolling) and of binary classification. Computational evidence supports the claim in the general case.
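As a concrete illustration of the multiclass generalization discussed in the abstract, the following is a minimal sketch in plain Python (the function name is ours) that computes the multiclass MCC directly from a K×K confusion matrix, where C[i][j] counts samples of true class i predicted as class j:

```python
def multiclass_mcc(C):
    """Multiclass Matthews Correlation Coefficient from a KxK confusion matrix.

    Uses the standard multiclass formula:
        (c*s - sum_k t_k*p_k) / sqrt(s^2 - sum_k p_k^2) / sqrt(s^2 - sum_k t_k^2)
    where c = trace (correct predictions), s = total samples,
    t_k = row sums (true-class counts), p_k = column sums (predicted counts).
    """
    K = len(C)
    s = sum(sum(row) for row in C)                 # total samples
    c = sum(C[k][k] for k in range(K))             # correctly classified
    t = [sum(C[k]) for k in range(K)]              # row sums: true counts
    p = [sum(C[i][k] for i in range(K)) for k in range(K)]  # col sums: predicted
    num = c * s - sum(t[k] * p[k] for k in range(K))
    den = (s * s - sum(x * x for x in p)) ** 0.5 \
        * (s * s - sum(x * x for x in t)) ** 0.5
    return num / den if den else 0.0               # convention: 0 if undefined
```

The result lies in [-1, 1]: a diagonal confusion matrix (perfect classification) gives 1, while a uniform matrix (the no-information "n-face dice rolling" case from the abstract) gives 0.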

Cited by 325 publications
(185 citation statements)
References 29 publications
“…The MCC measure was originally extended to the multi-class problem in [14]. Recently, and following a comparison between MCC and Confusion Entropy [40] reported in [17], MCC was recommended as an optimal tool for practical tasks, since it presents a good trade-off among discriminatory ability, consistency and coherent behavior with varying number of classes, unbalanced datasets and randomization.…”
Section: Performance Assessment Measures
confidence: 99%
“…Several studies related to data mining and classification have been conducted to propose and compare various classifiers and to demonstrate their prediction quality and performance [26], [27], [28]. Jurman et al. [29] show that the Matthews Correlation Coefficient (MCC) gives reliable results about the quality of binary and multi-class prediction classifiers. Trust evaluation models are a kind of classifier in which agents are classified based on their trustworthiness.…”
Section: Overall Architecture
confidence: 99%
“…The MCC is calculated from the confusion matrix [34]. The FFS was initialized by training a predictor using each feature separately.…”
Section: Feature Selection and Training
confidence: 99%
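The greedy procedure described in this last statement (initialize by scoring each feature separately, then grow the selected set while the score improves) can be sketched as follows; the function name and the `score` interface are assumptions for illustration, with `score` intended to evaluate a predictor on a feature subset using a metric such as the MCC:

```python
def forward_feature_selection(features, score):
    """Greedy forward feature selection (FFS).

    `score(subset)` must return a quality metric for a predictor trained
    on that feature subset (e.g. MCC on held-out data). Stops when adding
    any remaining feature no longer improves the score.
    """
    selected, remaining = [], list(features)
    best = float("-inf")
    while remaining:
        # Pick the candidate whose addition yields the highest score.
        cand = max(remaining, key=lambda f: score(selected + [f]))
        cand_score = score(selected + [cand])
        if cand_score <= best:
            break                      # no improvement: stop growing the set
        selected.append(cand)
        remaining.remove(cand)
        best = cand_score
    return selected, best
```

The per-feature initialization mentioned in the citation statement corresponds to the first pass of the loop, where each feature is scored on its own before the best one is kept.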