Computer Science & Information Technology 2020
DOI: 10.5121/csit.2020.100801
Multi-label Classifier Performance Evaluation with Confusion Matrix

Abstract: The confusion matrix is a useful and comprehensive presentation of classifier performance. It is commonly used in the evaluation of multi-class, single-label classification models, where each data instance can belong to only one class at any given time. However, the real world is rarely unambiguous, and hard classification of a data instance to a single class, i.e. defining its properties with a single distinctive feature, is not always possible. For example, an image can contain multiple objects and regio…

Cited by 87 publications (56 citation statements); references 16 publications.
“…Figure 3 shows the results of the twelve-lead ECG classifier in the form of a multi-class confusion matrix, modified for multi-label problems according to [8]. This awards normalized scores to each class combination such that correct classifications are scored only on the leading diagonal, with misclassifications elsewhere.…”
Section: Results
Mentioning confidence: 99%
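The statement above describes a confusion matrix adapted to multi-label problems, with normalized per-sample scores credited to the leading diagonal for correct labels and spread off-diagonal for misclassifications. A minimal sketch of one such scheme is shown below; the function name `multilabel_confusion` and the exact weight-distribution rule (each sample contributes total weight 1, with a missed true label's share spread uniformly over spurious predictions) are illustrative assumptions, not necessarily the precise construction of reference [8].

```python
def multilabel_confusion(true_sets, pred_sets, n_classes):
    """Build an n_classes x n_classes matrix from multi-label predictions.

    true_sets/pred_sets: per-sample sets of true and predicted class indices.
    Rows correspond to true labels, columns to predicted labels.
    NOTE: this is one plausible normalization scheme, chosen for illustration.
    """
    M = [[0.0] * n_classes for _ in range(n_classes)]
    for T, P in zip(true_sets, pred_sets):
        w = 1.0 / len(T)  # each sample contributes total weight 1, split over its true labels
        spurious = [p for p in P if p not in T]  # predicted labels that are not true
        for t in T:
            if t in P:
                M[t][t] += w  # correct label: credit the leading diagonal
            elif spurious:
                # missed true label: spread its weight over the spurious predictions
                for p in spurious:
                    M[t][p] += w / len(spurious)
            # if the label was missed and nothing spurious was predicted,
            # that weight is simply not assigned
    return M
```

With this rule, each row of the matrix sums to at most the total weight its true label received, and perfect prediction yields a purely diagonal matrix.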
“…Performance is calculated for each data instance and averaged over the whole dataset with example-based metrics. On the other hand, label-based metrics measure each label's performance separately before averaging across classes [ 59 ]. Hence, in this study, label-based macro-averaged accuracy, AUC, recall, precision, and F 1 score are used to evaluate the performance of the proposed model.…”
Section: Methods
Mentioning confidence: 99%
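The distinction drawn above is between example-based averaging (score each instance, then average over the dataset) and label-based macro averaging (score each label separately, then average across labels). A minimal sketch of the label-based macro-averaged precision, recall, and F1 mentioned in the statement, assuming binary indicator vectors as input, is:

```python
def macro_metrics(y_true, y_pred):
    """Label-based macro averaging: compute precision/recall/F1 per label,
    then take the unweighted mean across labels.

    y_true, y_pred: lists of equal-length 0/1 indicator vectors, one per sample.
    """
    n_labels = len(y_true[0])
    precs, recs, f1s = [], [], []
    for j in range(n_labels):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t[j] == 1 and p[j] == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t[j] == 0 and p[j] == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t[j] == 1 and p[j] == 0)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1)
    # macro average: every label counts equally, regardless of support
    return sum(precs) / n_labels, sum(recs) / n_labels, sum(f1s) / n_labels
```

Because every label contributes equally to the mean, macro averaging penalizes poor performance on rare labels, which is often the point of choosing it over example-based metrics.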
“…Several formulas or equations in the performance evaluation measure are usually applied separately or in combination to get a better performance-analysis perspective. Some of the calculations contained in the performance evaluation are as follows [46]. The precision method calculates the level of agreement between the results confirmed by user testing and the system's answers.…”
Section: Methods
Mentioning confidence: 99%
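The precision calculation described in this statement can be sketched as the fraction of the system's answers that user testing confirms as relevant. The function below is an illustrative helper (the names `system_answers` and `relevant` are assumptions, not from the cited work):

```python
def precision(system_answers, relevant):
    """Precision = |system answers confirmed relevant| / |system answers|.

    system_answers: items returned by the system.
    relevant: items judged correct by user testing (the ground truth).
    """
    answers, truth = set(system_answers), set(relevant)
    if not answers:
        return 0.0  # convention: no answers means zero precision
    return len(answers & truth) / len(answers)
```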