2009
DOI: 10.1016/j.patrec.2008.08.010
An experimental comparison of performance measures for classification

Cited by 753 publications (457 citation statements) · References 29 publications
“…However, in general terms the global measures show similar ranks. This is consistent with the results presented in Reference [8]. Table 2 shows some interesting results.…”

Section: Results (supporting)
confidence: 92%
“…Furthermore, a quantitative representation of a ROC curve is the area under it, which is known as the AUC [9]. The AUC measure has been adapted to multi-class problems [8] and can be defined as follows.…”

Section: Mean F-measure (Mfm) (mentioning)
confidence: 99%
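The AUC mentioned in the quotation above is equivalent to the probability that a randomly chosen positive instance receives a higher score than a randomly chosen negative one (the Wilcoxon-Mann-Whitney formulation). A minimal sketch with hypothetical score lists:

```python
def binary_auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive outranks a random
    negative; ties contribute 0.5 (Wilcoxon-Mann-Whitney statistic)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfectly separated scores give AUC = 1.0; identical scores give 0.5.
print(binary_auc([0.9, 0.8], [0.2, 0.1]))  # → 1.0
print(binary_auc([0.5, 0.5], [0.5, 0.5]))  # → 0.5
```

The O(|pos|·|neg|) double loop is the direct definition; rank-based formulations compute the same value in O(n log n).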
“…AUC can vary between 0 and 1; AUC = 0.5 denotes random guessing, while 1.0 indicates perfect accuracy. We use the "one versus the rest" method [68] to extend the binary ROC analysis to the three-class classification problem of thermal preference. The overall performance of a thermal preference classifier is computed by averaging the AUC of the ROC curves for all three classes.…”

Section: Performance Evaluation (mentioning)
confidence: 99%
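The averaging scheme described in this quotation can be sketched as follows: compute a one-vs-rest AUC per class from the per-class scores, then take the unweighted mean. The class names and probability matrix below are hypothetical illustrations, not data from the cited study:

```python
def binary_auc(pos, neg):
    """Pairwise (Wilcoxon) AUC; ties count 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ovr_macro_auc(y_true, scores, classes):
    """One-vs-rest AUC for each class, then the unweighted average."""
    aucs = []
    for k, c in enumerate(classes):
        pos = [s[k] for s, y in zip(scores, y_true) if y == c]
        neg = [s[k] for s, y in zip(scores, y_true) if y != c]
        aucs.append(binary_auc(pos, neg))
    return sum(aucs) / len(aucs)

# Hypothetical three-class example (e.g. cooler / no-change / warmer):
y = ["cool", "ok", "warm", "ok"]
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1],
         [0.2, 0.2, 0.6],
         [0.3, 0.5, 0.2]]
print(ovr_macro_auc(y, probs, ["cool", "ok", "warm"]))  # → 1.0
```

The unweighted mean treats all classes equally; a prevalence-weighted mean is the other common choice when class frequencies are imbalanced.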
“…- Rule accuracy Improvement (RI): Rule accuracy is defined as the percentage of patterns correctly classified by the rule, i.e., those covered by its antecedent that belong to the predicted class (true positives, TP) and those not covered that belong to a different class (true negatives, TN) (Equation 1; NP is the number of patterns). This metric is the common accuracy measure for binary classification [16,17]. Accordingly, new conditions result in an accuracy improvement of the rule, but not necessarily of the global system.…”

Section: Construction Phase (mentioning)
confidence: 99%
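The rule-accuracy definition in this quotation, (TP + TN) / NP, can be computed directly from a coverage mask and the pattern labels. A minimal sketch with a hypothetical rule and labels:

```python
def rule_accuracy(covered, labels, predicted_class):
    """(TP + TN) / NP: patterns covered by the antecedent that belong to
    the predicted class, plus uncovered patterns of any other class,
    divided by the total number of patterns NP."""
    tp = sum(1 for cov, y in zip(covered, labels)
             if cov and y == predicted_class)
    tn = sum(1 for cov, y in zip(covered, labels)
             if not cov and y != predicted_class)
    return (tp + tn) / len(labels)

# Hypothetical rule predicting class "pos"; it covers patterns 0 and 1.
covered = [True, True, False, False]
labels = ["pos", "neg", "neg", "pos"]
print(rule_accuracy(covered, labels, "pos"))  # → 0.5
```

Here pattern 0 is a TP, pattern 2 is a TN, and patterns 1 and 3 are misclassified, matching the quotation's point that tightening the antecedent can raise this rule-level score without improving the global system.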