2003
DOI: 10.1007/3-540-44886-1_25

AUC: A Better Measure than Accuracy in Comparing Learning Algorithms

Abstract: Predictive accuracy has been widely used as the main criterion for comparing the predictive ability of classification systems (such as C4.5, neural networks, and Naive Bayes). Most of these classifiers also produce probability estimations of the classification, but these are completely ignored in the accuracy measure. This is often taken for granted because both training and testing sets only provide class labels. In this paper we establish rigorously that, even in this setting, the area under the RO…

Cited by 343 publications (225 citation statements) | References 5 publications
“…Eclipse and Gnome bug data was used for experiments. Results were evaluated using ROC (receiver operating characteristic) curve [24] . Performance of Multinomial Naïve Bayes was found to be better than that of other classification algorithms.…”
Section: Related Work
confidence: 99%
“…In this study, AUC is estimated using Mann-Whitney statistic test as presented by Ling et al (2003). The AUC of a classifier G is defined as:…”
Section: Performance Metrics
confidence: 99%
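The Mann-Whitney estimate referenced in the citation statement above can be sketched as follows. This is a minimal illustration, not the cited papers' code (the function name and interface are ours): rank all examples by classifier score, giving tied scores their average rank; with n1 positives and n0 negatives, AUC = (S1 - n1(n1+1)/2) / (n0 * n1), where S1 is the sum of the positives' ranks.

```python
def auc_mann_whitney(labels, scores):
    """Estimate AUC from the Mann-Whitney rank-sum statistic.

    labels: iterable of 0/1 class labels (1 = positive).
    scores: classifier scores; higher means more likely positive.
    Tied scores receive their average rank.
    """
    # Indices sorted by ascending score.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        # Find the run of tied scores starting at position i.
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = ((i + 1) + (j + 1)) / 2.0  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    n1 = sum(1 for y in labels if y == 1)   # positives
    n0 = len(labels) - n1                   # negatives
    s1 = sum(r for r, y in zip(ranks, labels) if y == 1)
    return (s1 - n1 * (n1 + 1) / 2.0) / (n0 * n1)
```

A perfectly separating classifier yields AUC = 1.0, a random one about 0.5; this rank-based form avoids explicitly enumerating all positive-negative pairs.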
“…Ling et al (2003) suggest that its use should replace accuracy when measuring and comparing classifiers: the best classifier is the one with the largest AUC value.…”
Section: Performance Metrics
confidence: 99%
“…In our study, AUC is estimated using Mann-Whitney statistic test as presented in [38]. The AUC of a classifier G is defined as:…”
Section: GainRatio(ptest) = Gain(ptest) / SplitInfo(ptest)
confidence: 99%
“…The authors in [38] suggest that its use should replace accuracy when measuring and comparing classifiers: the best classifier is the one with the largest AUC; • comprehensibility: qualifies the exploitability of the produced model. For example, in a Bayesian network, a large number of parents for a node hinders the identification of its strong relations with them; • classification speed: which would also be a crucial factor if, for example, the training dataset is huge.…”
Section: GainRatio(ptest) = Gain(ptest) / SplitInfo(ptest)
confidence: 99%