2023
DOI: 10.1109/tpami.2022.3185311

Optimizing Two-Way Partial AUC With an End-to-End Framework

Abstract: The Area Under the ROC Curve (AUC) is a crucial metric for machine learning, which evaluates the average performance over all possible True Positive Rates (TPRs) and False Positive Rates (FPRs). Based on the knowledge that a skillful classifier should simultaneously embrace a high TPR and a low FPR, we turn to study a more general variant called Two-way Partial AUC (TPAUC), where only the region with TPR ≥ α, FPR ≤ β is included in the area. Moreover, a recent work shows that the TPAUC is essentially inconsist…
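
As a quick reference for the metric described in the abstract, the sketch below estimates an empirical TPAUC by integrating the ROC curve only over the region with TPR ≥ α and FPR ≤ β. The function name, the grid-based step-function integration, and the choice to leave the area unnormalized are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve

def two_way_partial_auc(y_true, y_score, alpha=0.5, beta=0.5, grid_size=2000):
    """Empirical Two-way Partial AUC: area of the region under the ROC curve
    with TPR >= alpha and FPR <= beta (left unnormalized in this sketch)."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    grid = np.linspace(0.0, beta, grid_size)
    # Treat the empirical ROC as a step function: at each grid FPR value,
    # take the TPR of the last operating point whose FPR does not exceed it.
    idx = np.searchsorted(fpr, grid, side="right") - 1
    tpr_at = tpr[idx]
    # Only the part of the curve lying above TPR = alpha contributes area.
    clipped = np.clip(tpr_at - alpha, 0.0, None)
    return float(np.sum(clipped[:-1] * np.diff(grid)))

# Toy usage on synthetic scores (illustrative values only).
rng = np.random.default_rng(0)
y = np.concatenate([np.ones(500), np.zeros(500)])
s = np.concatenate([rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)])
print(two_way_partial_auc(y, s, alpha=0.5, beta=0.5))
```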

Cited by 13 publications (9 citation statements)
References 50 publications
“…We conduct numerical experiments on both synthetic and real-world datasets to validate its utility and the fairness performance. One future direction is to consider fairness metrics involving group-level partial AUCs (Narasimhan and Agarwal 2017; Yang et al. 2021b).…”
Section: Discussion (mentioning)
confidence: 99%
“…Hence, they propose a new metric named Two-way Partial AUC (TPAUC), which pays attention to the upper-left head region under the ROC curve. Then, [40] first proposes an end-to-end TPAUC optimization framework, which has a profound impact on subsequent work [41]. Nevertheless, TPAUC does not align with the Top-K ranking metrics in recommendation.…”
Section: Partial AUC and Its Optimization (mentioning)
confidence: 99%
“…Regarding the optimization of partial AUC, previous works [7,21,24,26] rely on full-batch optimization and the approximation of the Top (Bottom)-K ranking, leading to immeasurable biases and inefficiency. Recently, novel end-to-end mini-batch optimization frameworks have been proposed [40,42,44]. These methods can be extended to optimize our proposed LLPAUC metric.…”
Section: Partial AUC and Its Optimization (mentioning)
confidence: 99%
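
To make the contrast with full-batch, Top-K-approximation approaches concrete, here is a loose sketch of the mini-batch idea: within each batch, a pairwise ranking loss is restricted to the hardest positives and negatives, mimicking the TPR ≥ α and FPR ≤ β heads. The function name, the keep ratios, and the squared-hinge pair loss are assumptions for illustration, not the specific losses of [40, 42, 44] or of the cited paper.

```python
import torch

def tpauc_minibatch_surrogate(scores, labels, pos_keep=0.5, neg_keep=0.5, margin=1.0):
    """Illustrative mini-batch surrogate in the spirit of TPAUC optimization
    (assumes the batch contains at least one positive and one negative)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    k_pos = max(1, int(pos_keep * pos.numel()))
    k_neg = max(1, int(neg_keep * neg.numel()))
    hard_pos = torch.topk(pos, k_pos, largest=False).values  # lowest-scored positives (TPR head)
    hard_neg = torch.topk(neg, k_neg, largest=True).values   # highest-scored negatives (FPR head)
    # Squared hinge on every hard positive/negative pair in the batch.
    diff = margin - (hard_pos.unsqueeze(1) - hard_neg.unsqueeze(0))
    return torch.clamp(diff, min=0).pow(2).mean()
```
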
“…Nevertheless, these algorithms are neither scalable nor applicable to deep learning. More recently, [38] considers two-way partial AUC maximization and simplifies the optimization problem for the large-scale setting. [41] proposes new formulations of partial AUC surrogate objectives using distributionally robust optimization (DRO).…”
Section: Partial AUC Maximization (mentioning)
confidence: 99%
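
As background for the DRO formulation mentioned above, a common KL-regularized construction replaces the plain average over negatives with a soft aggregation that concentrates on the hardest ones, which corresponds to the small-FPR region. The sketch below is a generic illustration of that idea; the names, defaults, and squared-hinge base loss are assumptions rather than the objective of [41].

```python
import math
import torch

def pauc_kl_dro_surrogate(scores, labels, lam=1.0, margin=1.0):
    """Generic KL-regularized DRO aggregation for a one-way partial AUC
    surrogate (illustrative sketch, not the exact cited objective)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Pairwise squared-hinge loss: rows index positives, columns index negatives.
    pairwise = torch.clamp(margin - (pos.unsqueeze(1) - neg.unsqueeze(0)), min=0).pow(2)
    # lam * log( mean_j exp(loss_ij / lam) ): as lam -> 0 this approaches the
    # maximum over negatives, focusing the loss on the hardest (small-FPR) region.
    per_pos = lam * (torch.logsumexp(pairwise / lam, dim=1) - math.log(neg.numel()))
    return per_pos.mean()
```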