2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00727
FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding

Citations: cited by 327 publications (280 citation statements: 1 supporting, 279 mentioning, 0 contrasting).
References: 26 publications.
“…The second category of classification-based methods is contrastive learning. Bo Sun et al [70] propose a few-shot object detection method via contrastive proposal encoding (FSCE). FSCE first proposes a stronger baseline than TFA which adapts better to novel data.…”
Section: Transfer-learning Methods (mentioning)
confidence: 99%
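The statement above summarizes FSCE's contrastive proposal encoding at a high level. As a rough illustration of the idea (not the authors' code), the sketch below shows a small projection head that maps pooled RoI features into a normalized embedding space where a contrastive objective can be applied; the class name, layer sizes, and feature dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveProposalHead(nn.Module):
    """Illustrative projection head for per-proposal RoI features.

    Maps pooled box features to unit-length embeddings so that a
    contrastive loss can compare proposals by cosine similarity.
    All dimensions are assumed for this sketch.
    """

    def __init__(self, in_dim=1024, embed_dim=128):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Linear(in_dim, in_dim),
            nn.ReLU(inplace=True),
            nn.Linear(in_dim, embed_dim),
        )

    def forward(self, roi_features):
        # roi_features: (num_proposals, in_dim) pooled box features.
        z = self.projector(roi_features)
        return F.normalize(z, dim=1)  # unit-length embeddings
```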
“…Another line of work focuses on improving the proposal generation process, by introducing attention mechanisms and generating class-aware features for classifiers [7,17,24,35,44,48]. Some work modifies the proposal ranking process and ranks proposals based on similarity with query images [7,17].…”
Section: Related Work (mentioning)
confidence: 99%
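For the similarity-based proposal ranking mentioned above, a minimal sketch follows. It assumes proposal and query-image embeddings are already computed and simply orders proposals by cosine similarity; the function name and shapes are illustrative, not taken from any cited implementation.

```python
import torch
import torch.nn.functional as F

def rank_proposals_by_query_similarity(proposal_embeds, query_embed):
    """Rank candidate proposals by cosine similarity to a query embedding.

    proposal_embeds: (N, D) feature vectors for N proposals (assumed precomputed).
    query_embed:     (D,) feature vector of the query (support) image.
    Returns proposal indices sorted from most to least similar, and the scores.
    """
    p = F.normalize(proposal_embeds, dim=1)
    q = F.normalize(query_embed, dim=0)
    scores = p @ q                                  # (N,) cosine similarities
    order = torch.argsort(scores, descending=True)  # best-matching proposals first
    return order, scores[order]
```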
“…An RPN ensemble method is proposed to avoid missing highly informative proposal boxes [48]. Contrastive-aware object proposal encodings are further learned to reduce the possibility of misclassifying novel class objects to confusable classes [35]. Additional information has also been shown helpful, such as semantic relations [50] and multi-scale representations [43].…”
Section: Related Work (mentioning)
confidence: 99%
“…Contrastive learning relies on both positive and negative examples of a class during training. This type of learning has been shown to be a successful auxiliary task independently for the few-shot paradigm [3,6,16,17,19,21,25,29,34,47] and for segmentation tasks [14,40]. Inspired by prior work, we offer the first examination of using contrastive learning for few-shot semantic segmentation with a fine-tuning approach.…”
Section: Related Work (mentioning)
confidence: 99%
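To make the positive/negative structure described above concrete, the sketch below implements a generic supervised contrastive loss (SupCon-style): embeddings with the same label are pulled together, all others in the batch are pushed apart. It is a minimal illustration under assumed shapes and a hypothetical function name, not the exact loss used in any particular cited paper.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.2):
    """Generic supervised contrastive loss over a batch of embeddings.

    Items (e.g. proposals or regions) sharing a label act as positives for one
    another; all remaining items in the batch act as negatives.
    embeddings: (N, D) features; labels: (N,) integer class labels.
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature                         # (N, N) similarity logits

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))     # exclude self-pairs

    # Positives: other items sharing the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    log_prob = F.log_softmax(sim, dim=1)                # log p(j | anchor i)
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))

    pos_count = pos_mask.sum(dim=1)
    has_pos = pos_count > 0                             # skip anchors with no positive
    loss = -pos_log_prob.sum(dim=1)[has_pos] / pos_count[has_pos]
    return loss.mean()
```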