2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00300

Bounding Box Regression With Uncertainty for Accurate Object Detection

Figure 1: In object detection datasets, the ground-truth bounding boxes have inherent ambiguities in some cases. The bounding box regressor is expected to incur a smaller loss from ambiguous bounding boxes with our KL Loss. (a)(c) Ambiguities introduced by inaccurate labeling. (b) Ambiguities introduced by occlusion. (d) The object boundary itself is ambiguous: it is unclear where the left boundary of the train is because the tree partially occludes it. (Better viewed in color.)

Abstract: Large-scale object d… [abstract truncated in source]

Cited by 502 publications (320 citation statements)
References 47 publications
“…The class which has more samples in a dataset or mini-batch during training, in the context of class imbalance. [17] and in KL Loss [54] for Smooth L1 Loss), while some methods such as GIoU Loss [55] directly predict the bounding box coordinates. For the sake of clarity, we use x to denote the regression loss input for any method.…”
Section: Over-represented Class
Mentioning confidence: 99%
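The Smooth L1 loss that the statement above uses as its reference point is the standard Huber-style regression loss; a minimal sketch as a function of the regression input x (the notation the citing paper adopts):

```python
def smooth_l1(x):
    """Smooth L1 loss on a single regression input x:
    quadratic near zero, linear for |x| >= 1, so large
    errors (outliers) are penalized less harshly than with L2."""
    ax = abs(x)
    if ax < 1.0:
        return 0.5 * x ** 2
    return ax - 0.5
```

The piecewise switch at |x| = 1 keeps the loss and its gradient continuous, which is why it is a common base for the modified regression losses discussed in these statements.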
“…Recent regression loss functions have different motivations: (i) Balanced L1 Loss [29] increases the contribution of the inliers. (ii) KL Loss [54] is motivated from the ambiguity of the positive samples. (iii) GIoU Loss [55] has the motive to use a performance metric as a loss function.…”
Section: A Regression Loss With Many Aspects
Mentioning confidence: 99%
“…Zhu et al. [41] enhanced the anchor design for small objects. He et al. [14] modeled the bounding … The idea of anchor-free detection is not new. DenseBox [15] first proposed a unified end-to-end fully convolutional framework that directly predicted bounding boxes.…”
Section: Related Work
Mentioning confidence: 99%
“…To the best of our knowledge, the most similar work to our approach is the recent Softer-NMS [7], which also uses KL divergence for bounding box localization. However, the motivation of Softer-NMS is to model the location uncertainty of each corner of proposed bounding boxes through a 1D Gaussian distribution.…”
Section: Bounding Ellipse and KL Divergence
Mentioning confidence: 99%
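The Softer-NMS motivation quoted above rests on the closed-form KL divergence between one-dimensional Gaussians. A minimal sketch of that standard formula (not code from either paper):

```python
import math

def kl_gauss_1d(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL(p || q) for the 1D Gaussians
    p = N(mu_p, sigma_p^2) and q = N(mu_q, sigma_q^2)."""
    return (math.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sigma_q ** 2)
            - 0.5)
```

The divergence is zero only when the two distributions coincide and grows with the gap between their means, which is what makes it usable as a localization loss between a predicted Gaussian and a (near-delta) ground-truth distribution.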