2015
DOI: 10.1007/s10462-015-9433-y
Dealing with the evaluation of supervised classification algorithms

Cited by 109 publications (86 citation statements)
References 93 publications
“…In particular, in the machine learning field, several kinds of solutions to the problems mentioned above can be enumerated: a statistic-based approach [57], or a more general approach that builds a ranking method (a ranking-based approach) [2,40,41,79].…”
Section: A2 The Indicators and Methods to Evaluate and Comparison Of…
confidence: 99%
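A minimal sketch of the two comparison styles this excerpt names, assuming hypothetical per-dataset accuracy scores: a statistic-based approach (a Friedman test over the scores) and a ranking-based approach (average rank per classifier). Data values and classifier count are illustrative, not from the cited papers.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Rows: datasets, columns: classifiers (hypothetical scores).
scores = np.array([
    [0.81, 0.79, 0.85],
    [0.72, 0.70, 0.74],
    [0.90, 0.88, 0.91],
    [0.66, 0.68, 0.71],
    [0.77, 0.75, 0.80],
])

# Statistic-based approach: Friedman test on the per-dataset scores.
stat, p_value = friedmanchisquare(*scores.T)
print(f"Friedman statistic = {stat:.3f}, p = {p_value:.4f}")

# Ranking-based approach: average rank of each classifier across datasets
# (rank 1 = best, so rank the negated scores).
ranks = rankdata(-scores, axis=1)
print("average ranks per classifier:", ranks.mean(axis=0))
```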
“…The MCMC procedure is configured to carry out bi = 100 samples for burn-in and s = 1000 samples for calculating the expected values. All the experiments have been validated using a 10 × 5-fold cross-validation (CV).…”
Section: Methods
confidence: 99%
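A minimal sketch of the 10 × 5-fold cross-validation scheme quoted above, assuming scikit-learn and a synthetic dataset in place of the cited experiments; the classifier choice is an assumption for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

# Hypothetical dataset standing in for the experiments in the excerpt.
X, y = make_classification(n_samples=500, random_state=0)

# 5 folds repeated 10 times with different shuffles => 50 scores in total.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(GaussianNB(), X, y, cv=cv)
print(f"10x5 CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```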
“…VIC takes the cluster indexes as class labels and trains an ensemble of supervised classifiers that are then tested using cross‐validation. The average area under the curve (AUC; Santafe et al., 2015) is the output of the index. The general idea is that the classification performance improves with the quality of the input partition.…”
Section: Related Work
confidence: 99%
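A minimal sketch of the VIC idea as the excerpt describes it: treat cluster indexes as class labels, cross-validate an ensemble of supervised classifiers, and output the average AUC. The clustering algorithm, the ensemble members, and the data are assumptions, not the setup of the cited work.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Hypothetical data; the cluster indexes become the class labels.
X, _ = make_blobs(n_samples=400, centers=2, random_state=0)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

ensemble = [GaussianNB(),
            LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=0)]

# VIC-style score: mean cross-validated AUC over the ensemble; a higher
# value suggests the partition is easier to separate, i.e. a better clustering.
aucs = [cross_val_score(clf, X, labels, cv=5, scoring="roc_auc").mean()
        for clf in ensemble]
print(f"VIC-style index: {np.mean(aucs):.3f}")
```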
“…Receiver operating characteristic curves are used to evaluate classifiers with continuous outputs that differentiate between two outcomes; they show how the classifier performs over the whole range of possible thresholds (Fawcett, 2006). AUC is less sensitive to cost and class imbalance than other non-balanced scores such as classification error; however, it has the limitation that it treats the cost of misclassification differently for each classification algorithm (Santafe et al., 2015). Nonetheless, AUC is the most used measure for class imbalance problems (Loyola‐González et al.), and compared with other measures, such as the geometric mean or the f‐measure, AUC measures more reliably the capability to correctly predict the label of a new object (Japkowicz & Shah; Japkowicz).…”
Section: Experimental Setup: A Masquerade Detection Setting
confidence: 99%
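A minimal sketch contrasting the measures this excerpt compares, assuming scikit-learn: AUC is computed from the classifier's continuous output over all thresholds, while the f-measure and the geometric mean of class-wise recalls depend on one fixed threshold. The imbalanced dataset and the model are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical imbalanced problem (about 5% positives).
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]  # continuous output for the ROC curve

# Threshold-dependent scores use one cutoff (0.5 here); AUC does not.
pred = (prob >= 0.5).astype(int)
recall_pos = (pred[y_te == 1] == 1).mean()
recall_neg = (pred[y_te == 0] == 0).mean()
print(f"AUC       = {roc_auc_score(y_te, prob):.3f}")
print(f"f-measure = {f1_score(y_te, pred):.3f}")
print(f"g-mean    = {np.sqrt(recall_pos * recall_neg):.3f}")
```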