2022 · Preprint · DOI: 10.20944/preprints202209.0277.v1

Early Retrieval Problem and Link Prediction Evaluation via the Area Under the Magnified ROC

Abstract: Link prediction is an unbalanced early retrieval problem whose goal is to prioritize a small cohort of positive links at the top of a list largely populated by unlabelled links. Unlike binary classification, the evaluation here focuses on how the predictor prioritizes the positive class because, in practice, a negative class does not exist. Previous studies explained that AUC-ROC is not apt for unbalanced class problems and is misleading for early retrieval problems; therefore standard AUC-ROC is not ap…
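The abstract is truncated, but the measure it introduces can be sketched. Below is a minimal Python reading of a "magnified ROC"; the exact axis transform is an assumption on our part (a logarithmic remapping that stretches the early-retrieval corner near the origin), not a formula taken verbatim from the preprint, and the function name is ours.

```python
import numpy as np

def auc_mroc_sketch(y_true, scores):
    """Hedged sketch of an AUC-mROC-style score.

    Assumption (not verbatim from the preprint): both ROC axes are
    magnified with u -> log(1 + u*n) / log(1 + n), stretching the
    region near the origin where early retrieval happens, and the
    area is taken under the remapped curve. Ties in scores are ignored.
    """
    y = np.asarray(y_true)[np.argsort(-np.asarray(scores))]
    P, N = y.sum(), len(y) - y.sum()
    tpr = np.concatenate(([0.0], np.cumsum(y) / P))      # recall at each rank
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / N))  # false-positive rate
    mtpr = np.log1p(tpr * P) / np.log1p(P)               # magnified axes
    mfpr = np.log1p(fpr * N) / np.log1p(N)
    # trapezoidal area under the magnified curve
    return float(np.sum(np.diff(mfpr) * (mtpr[1:] + mtpr[:-1]) / 2))

# tiny usage example with hypothetical labels and scores
if __name__ == "__main__":
    y = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
    s = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
    print(auc_mroc_sketch(y, s))
```

Because the remapping compresses the late part of the curve, mistakes near the top of the ranking cost far more area than mistakes deep in the list, which is the behavior an early-retrieval measure should have.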

Cited by 5 publications (9 citation statements) · References 39 publications
“… [1] about near optimality are based on evaluations using AUC-ROC, although several recent studies have shown that AUC-ROC is inappropriate for early retrieval problems and for unbalanced prediction tasks such as link prediction [9,14,15,16,17,18,19,20]. A recent study by Muscoloni and Cannistraci [21] proposes the AUC-mROC, which we report in the main figures of this study (in lieu of AUC-ROC, whose values are provided for completeness in Tables 1 and 2 of this study) and which addresses the evaluation issues widely discussed in the literature on AUC-ROC. In conclusion, the precision-based results in Table 1 of Ghasemian et al. [1], as well as the results in our study, raise concrete concerns about the near optimality of stacking models when the link prediction task is evaluated according to more appropriate evaluation measures. …”
Section: Results · Mentioning (confidence: 99%)
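The quoted statement rests on the claim that AUC-ROC misleads on unbalanced early-retrieval tasks such as link prediction. The short self-contained simulation below (hypothetical data, not drawn from any cited paper) illustrates the failure mode: with 1% positives, a scorer can reach a high AUC-ROC while precision among its top-ranked candidates stays low.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unbalanced task: 100 positives among 10,000 candidates (1%).
n, n_pos = 10_000, 100
y = np.zeros(n, dtype=int)
y[:n_pos] = 1

# Scores: positives are shifted upward, but noise leaves many negatives
# outranking them near the top of the list.
scores = rng.normal(0.0, 1.0, n) + 1.5 * y

# AUC-ROC via the Mann-Whitney rank formula
# (probability that a random positive outranks a random negative).
ranks = np.empty(n)
ranks[np.argsort(scores)] = np.arange(1, n + 1)
auc = (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * (n - n_pos))

# Early-retrieval view: precision among the top n_pos predictions.
top = np.argsort(-scores)[:n_pos]
precision_at_k = y[top].mean()

print(f"AUC-ROC = {auc:.2f}, precision@{n_pos} = {precision_at_k:.2f}")
```

With this seed the AUC-ROC comes out around 0.85 while precision among the top 100 predictions is roughly 0.2: the global ranking looks strong, yet most of the early retrievals are wrong, which is exactly the gap the quoted papers object to.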
“…AUC-mROC can adjust the AUC-ROC for the evaluation of early retrieval problems in general and link prediction in particular [21]. For this reason, we prefer to report AUC-mROC in the main figures of this study, and AUC-ROC in Tables 1 and 2. …”
Section: Results · Mentioning (confidence: 99%)
“…Well-known threshold-free metrics include BP [31], AUC [34], AUPR [35], and NDCG [36]. This work will also analyze a recently proposed metric called AUC-mROC [28]. BP represents the intersection of the Precision@k and Recall@k curves, specifically when the threshold k equals the size of the testing set (i.e.…”
Section: Threshold-free Metrics · Mentioning (confidence: 99%)
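The quoted definition of BP can be made concrete. The sketch below reads "the size of the testing set" as the number of held-out positive links, which is exactly the cutoff k at which Precision@k (hits/k) and Recall@k (hits/P) coincide; the function name and interface are ours, not from [31].

```python
import numpy as np

def balanced_precision(y_true, scores):
    """BP as quoted above: Precision@k evaluated at k = number of
    positives in the test set. At that cutoff Precision@k (= hits/k)
    and Recall@k (= hits/P) are the same number because k = P.
    Name and interface are ours, not taken from [31]."""
    y = np.asarray(y_true)
    k = int(y.sum())                       # size of the positive test set
    top_k = np.argsort(-np.asarray(scores))[:k]
    return y[top_k].mean()                 # = Precision@k = Recall@k

# usage with hypothetical labels and scores
print(balanced_precision([1, 0, 1, 0, 0], [0.9, 0.8, 0.2, 0.7, 0.1]))
```

Here k = 2 (two positives), the two highest-scoring items are indices 0 and 1, and only the first is a true positive, so BP = 0.5.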
“…Additionally, Lobo et al. [27] questioned AUC and offered reasons against its use. In pursuit of better ways to evaluate algorithms for imbalanced classification, and of a more nuanced characterization of the differences between algorithms, some researchers introduced innovative evaluation metrics, such as the area under the magnified receiver operating characteristic (AUC-mROC) [28]. …”
Section: Introduction · Mentioning (confidence: 99%)