2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01462
Seeing without Looking: Contextual Rescoring of Object Detections for AP Maximization

Cited by 20 publications (4 citation statements)
References 19 publications
“…In addition, the greedy merging strategy is sub-optimal, which may lead to false detections. To remedy this, contextual information [44] and a graph neural network can be employed to better re-score the confidence of the detection bounding boxes.…”
Section: Discussion (mentioning)
confidence: 99%
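The statement above only suggests that a graph neural network could be used for rescoring; neither the cited paper nor this page gives an implementation. The following is a minimal, purely illustrative sketch under that assumption: detections form a graph whose edge weights are pairwise IoU values, and one round of message passing regresses new confidences. All class, function, and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

def box_iou(boxes):
    # boxes: (N, 4) as (x1, y1, x2, y2); returns (N, N) pairwise IoU
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    lt = torch.max(boxes[:, None, :2], boxes[None, :, :2])
    rb = torch.min(boxes[:, None, 2:], boxes[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area[:, None] + area[None, :] - inter + 1e-6)

class GraphRescorer(nn.Module):
    """Hypothetical graph-based rescorer: one round of message passing
    over a detection graph weighted by IoU, then a new score per box."""
    def __init__(self, num_classes, d=64):
        super().__init__()
        self.embed = nn.Linear(4 + 1 + num_classes, d)  # box + score + class
        self.message = nn.Linear(d, d)
        self.update = nn.GRUCell(d, d)
        self.readout = nn.Linear(d, 1)

    def forward(self, boxes, scores, class_onehot):
        # boxes: (N, 4), scores: (N, 1), class_onehot: (N, C)
        adj = box_iou(boxes)                      # (N, N) edge weights
        adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize (self-loop keeps sum > 0)
        h = self.embed(torch.cat([boxes, scores, class_onehot], dim=-1))
        msg = adj @ self.message(h)               # aggregate neighbor messages
        h = self.update(msg, h)                   # update node states
        return torch.sigmoid(self.readout(h)).squeeze(-1)  # (N,) rescored confidences
```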
“…PyramidBox [4] created a brand-new context anchor to supervise the semi-supervised learning of high-level contextual features. Pato et al. [30] propose a technique for incorporating context into object detection by rescoring the detection confidences in a post-processing step applied to the output of an arbitrary detector.…”
Section: Context Information (mentioning)
confidence: 99%
“…Fu et al. modeled and inferred the inherent semantics and spatial-layout relationships between objects, retaining as much spatial information as possible when extracting the semantic features of small objects [20]. Pato et al. proposed a contextual rescoring algorithm that uses RNNs (recurrent neural networks) and self-attention to pass information between candidate regions, build a contextual representation, and use the resulting context to re-evaluate the detection results [21].…”
Section: Contextual Learning (mentioning)
confidence: 99%
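As described in the statement above, the rescoring model of Pato et al. passes information between candidate detections and then regresses new confidences. The sketch below illustrates only the self-attention part of that idea (the published model also uses a recurrent network); the class name, feature sizes, and input encoding are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ContextualRescorer(nn.Module):
    """Illustrative self-attention rescorer: each detection (box geometry,
    score, class) attends to all other detections in the same image and a
    new confidence is regressed per box."""

    def __init__(self, num_classes, d_model=64, n_heads=4):
        super().__init__()
        # per-detection input: 4 box coords + 1 score + one-hot class
        self.embed = nn.Linear(4 + 1 + num_classes, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score_head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1)
        )

    def forward(self, boxes, scores, class_onehot):
        # boxes: (B, N, 4) normalized coords, scores: (B, N, 1),
        # class_onehot: (B, N, C); N = detections per image
        x = torch.cat([boxes, scores, class_onehot], dim=-1)
        h = self.embed(x)
        ctx, _ = self.attn(h, h, h)            # detections exchange context
        new_scores = torch.sigmoid(self.score_head(ctx)).squeeze(-1)
        return new_scores                       # (B, N) rescored confidences


if __name__ == "__main__":
    B, N, C = 2, 50, 80                         # e.g. 80 COCO-style classes
    model = ContextualRescorer(num_classes=C)
    boxes = torch.rand(B, N, 4)
    scores = torch.rand(B, N, 1)
    classes = torch.nn.functional.one_hot(
        torch.randint(0, C, (B, N)), C).float()
    print(model(boxes, scores, classes).shape)  # torch.Size([2, 50])
```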