2019
DOI: 10.2139/ssrn.3350582
When to Consult Precision-Recall Curves

Cited by 12 publications (14 citation statements) | References 17 publications
“…Receiver operating characteristic (ROC) curves were plotted using Xpert test results as the reference standard and areas under the curve (ROC AUCs) were calculated. In addition, we calculated the area under the precision-recall curve (PR AUC), due to the test library’s low overall Xpert positivity rate 48. We then identified two cut-off thresholds to transform the continuous abnormality score of each CAD software into dichotomous normal/abnormal interpretations that matched the sensitivity achieved by the Expert and Intermediate Readers.…”
Section: Methods
confidence: 99%
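The quoted methods describe dichotomizing each CAD software's continuous abnormality score at a cut-off matched to a human reader's sensitivity. A minimal sketch of one way to pick such a cut-off (the function names and toy data are illustrative assumptions, not taken from the cited study):

```python
import math

def threshold_matching_sensitivity(y_true, scores, target_sensitivity):
    # Lowest cut-off whose sensitivity (recall) reaches the target:
    # take the k-th highest positive score, k = ceil(target * n_positives).
    pos_scores = sorted((s for y, s in zip(y_true, scores) if y == 1),
                        reverse=True)
    k = math.ceil(target_sensitivity * len(pos_scores))
    return pos_scores[k - 1]

def sensitivity_at(y_true, scores, threshold):
    # Fraction of true-positive cases called abnormal at this threshold.
    positives = [s for y, s in zip(y_true, scores) if y == 1]
    return sum(s >= threshold for s in positives) / len(positives)

# Toy data: 3 reference-positive cases among 10, with continuous scores.
y = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.6, 0.3, 0.8, 0.5, 0.4, 0.2, 0.15, 0.1, 0.05]

# Match a hypothetical reader operating at two-thirds sensitivity.
thr = threshold_matching_sensitivity(y, scores, 2 / 3)
```

Scores at or above `thr` are then interpreted as "abnormal", giving a dichotomous read whose sensitivity matches the reference reader's by construction.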
“…Nonetheless, Fawcett et al. 35 advocated the use of ROC because it is insensitive to changes in the prevalence of the outcome. Cook and Ramadas 36 explained that if the primary goal, as relevant to pharmacovigilance, is to maximize sensitivity by identifying all of the positive cases, ROC curves may still be preferable.…”
Section: Discussion
confidence: 99%
“…Similar to the ROC AUC score, the area under the PR curve (PR AUC) summarizes the model's performance, where values closer to one indicate better classifiers. As Cook and Ramadas [39] state, the ROC curve is preferred when we care more about identifying a high percentage of the positives, whereas the PR AUC ensures that the predicted positive instances are mainly positive. Here we employ both metrics, as they provide somewhat complementary information about the model's predictive potential.…”
Section: Predictive Potential
confidence: 99%
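The contrast the citing papers draw — ROC AUC insensitive to prevalence, PR AUC sensitive to it — can be demonstrated with small rank-based implementations of both metrics (a pure-Python sketch on toy data, not the cited study's code):

```python
def roc_auc(y_true, scores):
    # Rank statistic: P(random positive outscores random negative),
    # counting ties as half a win.
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def pr_auc(y_true, scores):
    # Average precision: precision accumulated at each recalled positive.
    ranked = sorted(zip(scores, y_true), reverse=True)
    tp, ap = 0, 0.0
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / rank
    return ap / tp

# Fixed score distributions for positives and negatives.
pos, neg = [0.9, 0.7], [0.8, 0.6, 0.5]
y1, s1 = [1] * len(pos) + [0] * len(neg), pos + neg

# Same score distributions, but 3x as many negatives (lower prevalence).
y2, s2 = [1] * len(pos) + [0] * (3 * len(neg)), pos + neg * 3
```

Because ROC AUC only compares positive-negative score pairs, duplicating the negatives leaves it unchanged (5/6 in both cases), while the average precision falls (from about 0.833 to 0.7) as each false positive now crowds more of the top ranks — which is why PR AUC is the more informative summary when positives are rare.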