Imbalanced Learning 2013
DOI: 10.1002/9781118646106.ch1
Introduction

Cited by 43 publications (35 citation statements)
References 42 publications
“…We evaluated the algorithm with test-set AUC, area under the precision-recall curve (AUC-PR), sensitivity, specificity, and metrics that combine sensitivity and specificity that are commonly used on imbalanced classification problems: F-score and Youden index. We also present the positive predictive value and negative predictive value.…”
Section: Methods (mentioning)
confidence: 99%
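As a minimal sketch of how this set of metrics is typically computed (illustrative only; the data and variable names such as y_true and y_prob are assumptions, not taken from the cited study), scikit-learn plus a confusion matrix covers AUC, AUC-PR, sensitivity, specificity, PPV, NPV, the F-score, and the Youden index:

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             confusion_matrix)

# Illustrative labels and predicted probabilities; in practice these
# come from a held-out test set.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
y_prob = np.array([0.1, 0.2, 0.15, 0.3, 0.4, 0.35, 0.6, 0.55, 0.8, 0.9])
y_pred = (y_prob >= 0.5).astype(int)              # threshold at 0.5

auc = roc_auc_score(y_true, y_prob)               # area under the ROC curve
auc_pr = average_precision_score(y_true, y_prob)  # AUC-PR (average precision)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                      # recall / true-positive rate
specificity = tn / (tn + fp)                      # true-negative rate
ppv = tp / (tp + fp)                              # positive predictive value
npv = tn / (tn + fn)                              # negative predictive value
f_score = 2 * ppv * sensitivity / (ppv + sensitivity)
youden = sensitivity + specificity - 1            # Youden's J statistic
```

Both the F-score and the Youden index collapse sensitivity and a precision- or specificity-type quantity into a single number, which is why they are favoured over plain accuracy when the classes are imbalanced.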
“…While the selection of lesions from the Dermofit dataset does not present an extreme imbalance between benign and malignant tumours (≈29:20), when comparing between individual samples this imbalance increases greatly (≈97:13 in the worst of cases). From this perspective, the present study chose to use evaluation metrics less susceptible to changes in sample balance [55], namely Accuracy, Precision, Recall, the F1 Score, and the Area Under the precision–recall Curve (AUC). Each of these metrics, except for AUC, was calculated using confusion matrices, measuring the ratio of correctly classified individuals (True Positive & True Negative), as well as misclassified individuals (False Positive & False Negative).…”
Section: Methods (mentioning)
confidence: 99%
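A rough illustration of the confusion-matrix-based metrics named above (this is not code from the cited study; the benign/malignant labels and scores below are hypothetical):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, average_precision_score,
                             confusion_matrix)

# Hypothetical benign (0) vs. malignant (1) labels and model scores.
y_true = np.array([0, 0, 0, 0, 1, 0, 1, 0, 1, 1])
y_score = np.array([0.05, 0.2, 0.4, 0.3, 0.7, 0.6, 0.8, 0.1, 0.9, 0.45])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "Accuracy":  accuracy_score(y_true, y_pred),     # (TP + TN) / all
    "Precision": precision_score(y_true, y_pred),    # TP / (TP + FP)
    "Recall":    recall_score(y_true, y_pred),       # TP / (TP + FN)
    "F1":        f1_score(y_true, y_pred),
    "AUC-PR":    average_precision_score(y_true, y_score),  # threshold-free
}
print(metrics, {"TP": tp, "TN": tn, "FP": fp, "FN": fn})
```

Under a per-sample imbalance as severe as ≈97:13, accuracy alone can look high even when most minority-class samples are missed, which is why the precision- and recall-based metrics are reported alongside it.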
“…Additionally, the area under the receiver operating characteristic (ROC) curve is used to evaluate several thresholds between Recall and the false-positive rate (FPR), defined as FP/(FP+TN). The accuracy of the detection results (F1-score) is assessed using the Precision vs. Recall curve, which focuses on evaluating the performance of a classifier for different probability thresholds on the minority class (He and Ma, 2013).…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
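A small sketch of the threshold sweep described in that statement (illustrative only; He and Ma, 2013 do not prescribe this exact code, and the detector scores below are made up), using scikit-learn's curve functions:

```python
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve, auc

# Hypothetical detector scores for a minority-positive class.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_score = np.array([0.1, 0.3, 0.2, 0.4, 0.25, 0.5, 0.15, 0.35, 0.7, 0.9])

# ROC: false-positive rate FP/(FP+TN) vs. recall at every threshold.
fpr, tpr, roc_thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)

# Precision-recall curve: classifier behaviour on the minority class
# across probability thresholds.
precision, recall, pr_thresholds = precision_recall_curve(y_true, y_score)

# F1 at each threshold (the last precision/recall point has no threshold).
f1 = 2 * precision[:-1] * recall[:-1] / np.maximum(
    precision[:-1] + recall[:-1], 1e-12)
best_threshold = pr_thresholds[np.argmax(f1)]   # threshold with highest F1
```

The ROC curve summarises the trade-off against the false-positive rate, while the precision–recall curve is usually the more informative view when positives are rare, since the FPR denominator (FP + TN) is dominated by the large negative class.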