2021
DOI: 10.1109/tpami.2020.2991457

AP-Loss for Accurate One-Stage Object Detection

Abstract: One-stage object detectors are trained by optimizing classification-loss and localization-loss simultaneously, with the former suffering much from extreme foreground-background class imbalance issue due to the large number of anchors. This paper alleviates this issue by proposing a novel framework to replace the classification task in one-stage detectors with a ranking task, and adopting the Average-Precision loss (AP-loss) for the ranking problem. Due to its non-differentiability and non-convexity, the AP-los…
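The abstract cuts off where it describes how the non-differentiable, non-convex AP-loss is handled; the paper's published answer (also summarized in the citation statements below) is an error-driven update scheme adapted from perceptron learning. The following is a minimal NumPy sketch of that idea under simplifying assumptions (hard step function, single class, flat `scores`/`labels` arrays); it is an illustration, not the authors' reference implementation.

```python
import numpy as np

def ap_loss_with_error_driven_grad(scores, labels):
    """Illustrative AP-loss forward pass plus error-driven 'gradient'.

    scores: (N,) anchor classification scores.
    labels: (N,) with 1 = foreground (positive) anchor, 0 = background.
    Returns (loss, grad); grad would be pushed back through the network.
    """
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    if len(pos) == 0:
        return 0.0, np.zeros_like(scores, dtype=float)

    # Ranking transformation: pairwise differences x_ij = s_j - s_i
    # for every (positive i, negative j) pair.
    x = scores[neg][None, :] - scores[pos][:, None]   # shape (|P|, |N|)
    h = (x >= 0).astype(float)                        # hard step function H

    # rank(i) = positives scored >= s_i (incl. i itself) + negatives scored >= s_i.
    rank_pos = (scores[pos][None, :] >= scores[pos][:, None]).sum(axis=1)
    rank = rank_pos + h.sum(axis=1)

    # Primary terms L_ij = H(x_ij) / rank(i); AP-loss = (1/|P|) * sum_ij L_ij.
    L = h / rank[:, None]
    loss = L.sum() / len(pos)

    # Error-driven update (perceptron style): the target for every L_ij is 0,
    # so the desired change of x_ij is -L_ij. Pushing that back through
    # x_ij = s_j - s_i raises positive scores and lowers negative scores.
    grad = np.zeros_like(scores, dtype=float)
    grad[pos] = -L.sum(axis=1) / len(pos)
    grad[neg] = L.sum(axis=0) / len(pos)
    return loss, grad
```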

Cited by 59 publications (48 citation statements)
References 63 publications
“…DR Loss [31] achieves ranking between positives and negatives by enforcing a margin with Hinge Loss. Finally, AP Loss [6] and aLRP Loss [27] optimize the performance metrics, AP and LRP [26] respectively, by using the error-driven update of perceptron learning [35] for the non-differentiable parts. However, they need longer training and heavy augmentation.…”
Section: Related Work (mentioning)
confidence: 99%
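For context, the "error-driven update of perceptron learning [35]" that this statement refers to is the classical perceptron rule, which changes the weights only when a prediction is wrong; AP Loss and aLRP Loss adapt that idea to their non-differentiable ranking terms. A textbook sketch of the rule (not code from either paper):

```python
import numpy as np

def perceptron_step(w, x, y, lr=1.0):
    """One classical error-driven perceptron update.

    w: weight vector, x: feature vector, y: label in {-1, +1}.
    The weights move only when the current prediction is an error,
    which is the property the ranking losses above borrow.
    """
    pred = 1 if np.dot(w, x) >= 0 else -1
    if pred != y:
        w = w + lr * y * x   # shift the decision boundary toward the mistake
    return w
```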
“…Recently proposed ranking-based loss functions, namely "Average Precision (AP) Loss" [6] and "average Localisation Recall Precision (aLRP) Loss" [27], offer two important advantages over the classical score-based functions (e.g. Cross-entropy Loss and Focal Loss [22]): (1) They directly optimize the performance measure (e.g.…”
Section: Introduction (mentioning)
confidence: 99%
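The performance measure these ranking-based losses target is Average Precision itself. As a reference point, a plain AP computation over a ranked list looks roughly like the sketch below (illustrative only; detector benchmarks such as COCO additionally match detections to ground truth by IoU and interpolate the precision-recall curve):

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """Plain AP over one ranked detection list (illustrative sketch).

    scores: (N,) detection confidences.
    is_true_positive: (N,) 1 if the detection matches a ground-truth box.
    num_gt: total number of ground-truth objects (recall denominator).
    """
    order = np.argsort(-scores)                  # rank by descending score
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    # AP = average of the precision values at the ranks of the true positives.
    return float((precision * tp).sum() / max(num_gt, 1))
```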
“…The entire performance comparisons between the optimum InVision model and the 10 comparison methods are shown in Table 2. Particularly, InVision outperforms (1) the seven general DCNN-based approaches [33][34][35][36][37][38][39] by 23.1%-34.6% with respect to mAP and outperforms (2) the three DCNN-based ISAR recognition approaches described in [40][41][42] by 33.6%-37.1%. Moreover, all the compared methods take too long time to run (more than 18 s), while InVision costs only 0.607 s, which is 31.24 × 80.…”
Section: Overall Performance Evaluation (mentioning)
confidence: 99%
“…AP-Loss [36] transforms the classification task into the sorting task and minimizes the AP-Loss of the system based on the network error and its optimization algorithm. Firstly, the prediction box and score are transformed to obtain the transformation format of the prediction box and score, as shown in the following equations:…”
Section: Optimization of Loss Function (mentioning)
confidence: 99%
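The equations this statement leads into are elided in the snippet; in the original AP-Loss formulation, the "transformation" maps per-anchor scores to pairwise score differences over (positive, negative) anchor pairs, and the ranking loss is then defined on those pairs. A minimal sketch of that transformation, assuming flat score/label arrays:

```python
import numpy as np

def pairwise_ranking_transform(scores, labels):
    """Map per-anchor scores to the pairwise form used by a ranking loss.

    scores: (N,) anchor scores; labels: (N,) with 1 = positive, 0 = negative.
    Returns x with x[i, j] = s_neg_j - s_pos_i for every positive/negative
    pair; a non-negative entry means a background anchor is ranked above a
    foreground anchor, which is what the AP-based ranking loss penalizes.
    """
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    x = scores[neg][None, :] - scores[pos][:, None]   # shape (|P|, |N|)
    return x
```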