2021
DOI: 10.48550/arxiv.2104.08736
Preprint
Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence

Abstract: Areas under ROC (AUROC) and precision-recall curves (AUPRC) are common metrics for evaluating classification performance for imbalanced problems. Compared with AUROC, AUPRC is a more appropriate metric for highly imbalanced datasets. While direct optimization of AUROC has been studied extensively, optimization of AUPRC has been rarely explored. In this work, we propose a principled technical method to optimize AUPRC for deep learning. Our approach is based on maximizing the averaged precision (AP), which is an…

Cited by 3 publications (20 citation statements)
References 23 publications
“…On the other hand, directly optimizing AUPRC is generally intractable due to the involved complicated integral operation. To mitigate this issue, most of the existing works seek to optimize certain estimator of AUPRC [4,42,43]. In this paper, we focus on maximizing average precision (AP), which is one of the most commonly used estimators in practice for the purpose of maximizing AUPRC.…”
Section: Introduction
Mentioning confidence: 99%
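The average precision (AP) that the citation statement describes as the most common AUPRC estimator can be computed directly from ranked scores. A minimal sketch with toy data; the function name is illustrative, not from the paper:

```python
import numpy as np

def average_precision(labels, scores):
    """AP: for each positive example, take the precision at its rank
    when all examples are sorted by score descending, then average
    over the positives. This is the step-wise estimator of AUPRC."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(-scores)            # sort descending by score
    sorted_labels = labels[order]
    hits = np.cumsum(sorted_labels)        # true positives within top-k
    ranks = np.arange(1, len(sorted_labels) + 1)
    precisions = hits / ranks              # precision@k at every cutoff
    return precisions[sorted_labels == 1].mean()

# Toy imbalanced example: 2 positives among 6 examples.
# Positives land at ranks 1 (precision 1/1) and 4 (precision 2/4),
# so AP = (1.0 + 0.5) / 2 = 0.75.
y = [1, 0, 0, 1, 0, 0]
s = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
print(average_precision(y, s))  # → 0.75
```

Because AP averages precision only over the positives, it stays informative when positives are rare, which is why the citing papers prefer it to AUROC for highly imbalanced data.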
“…Although a few studies have tried to optimize AP for AUPRC optimization [4,5,6,24,39,43,45], most of them are heuristic driven and do not provide any convergence guarantee. Recently, Qi et al [42] made a breakthrough towards optimizing a differentiable surrogate loss of AP with provable convergence guarantee. They cast the objective as a sum of non-convex compositional functions, and propose a principled stochastic method named SOAP for solving the special optimization problem.…”
Section: Introduction
Mentioning confidence: 99%
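The "sum of non-convex compositional functions" structure mentioned above can be made concrete with a smoothed AP surrogate. This is an assumed illustrative form, not the exact SOAP loss of Qi et al.: the hard comparison 1[s_j ≥ s_i] is replaced by a sigmoid of the score difference, so precision at each positive becomes a differentiable ratio of two sums (a composition f(g(w)) summed over positives):

```python
import numpy as np

def soft_ap_surrogate(scores, labels, tau=0.1):
    """Sketch of a differentiable AP surrogate (illustrative, not the
    SOAP loss itself). For each positive i, the soft rank among all
    examples and the soft rank among positives are ratios of sigmoid
    sums; averaging their quotient gives a smooth AP to maximize."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = labels == 1
    # s_j - s_i for every positive i (rows) against all examples j (cols)
    diff = scores[None, :] - scores[pos][:, None]
    soft_cmp = 1.0 / (1.0 + np.exp(-diff / tau))  # smooth 1[s_j >= s_i]
    rank_all = soft_cmp.sum(axis=1)               # soft rank of i overall
    rank_pos = soft_cmp[:, pos].sum(axis=1)       # soft rank among positives
    ap = (rank_pos / rank_all).mean()             # smooth average precision
    return 1.0 - ap                               # loss to minimize

# A perfect ranking drives the loss toward 0; a reversed one does not.
good = soft_ap_surrogate([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0], tau=0.01)
bad = soft_ap_surrogate([0.1, 0.2, 0.8, 0.9], [1, 1, 0, 0], tau=0.01)
```

The inner sums depend on the model parameters through every score, which is what makes naive minibatch gradients biased and motivates the principled stochastic scheme the citation credits to SOAP.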