2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00890
On the Importance of Distractors for Few-Shot Classification

Cited by 31 publications (12 citation statements)
References 23 publications
“…Typically, CD-FSL with only source data is the strictest setting, demanding that the model recognize an entirely unseen target dataset without any target information. Flagship works include FWT [44], BSCD-FSL [18], LRP [40], ATA [48], wave-SAN [14], RDC [29], NSAE [32], and ConFT [10]. Though many well-designed techniques e.g.…”
Section: Related Work
confidence: 99%
“…Though many well-designed techniques have been proposed, e.g. readjusting the batch normalization [44], augmenting the difficulty of meta tasks [48], spanning style distributions [14], and even fine-tuning models on a few target images during the testing stage [10,18,29,32], their performance is still greatly limited by the huge domain gap. By contrast, STARTUP [33] relaxes this strict setting and uses unlabeled target data for training.…”
Section: Related Work
confidence: 99%
“…According to the way they construct and choose positive samples, these approaches can be roughly divided into supervised [53, 55-57] and self-supervised methods [58, 59]. Recently, several methods have utilised contrastive learning in few-shot image classification to improve performance [18,19,60,61]. While they generally use it to learn a good backbone on the base set or a task-specific representation, our method utilises it to obtain a triplet network that projects all features into a more discriminative space.…”
Section: Contrastive Learning
confidence: 99%
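The triplet-network idea described in the excerpt above can be illustrated with a short sketch: a small projection head maps backbone features into an embedding space, and a margin-based triplet loss shapes that space. The projection-head dimensions, margin value, and batch layout below are illustrative assumptions, not the cited paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Small MLP mapping backbone features into a more discriminative
    embedding space (hypothetical dimensions, not the authors' design)."""
    def __init__(self, in_dim=640, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, in_dim), nn.ReLU(inplace=True),
            nn.Linear(in_dim, out_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss: pull anchor-positive pairs together,
    push anchor-negative pairs at least `margin` further apart."""
    d_pos = (anchor - positive).pow(2).sum(-1)
    d_neg = (anchor - negative).pow(2).sum(-1)
    return F.relu(d_pos - d_neg + margin).mean()

# Usage sketch: features assumed to come from a pre-trained backbone.
head = ProjectionHead()
feats = torch.randn(32, 3, 640)  # (batch, [anchor, positive, negative], feat_dim)
a, p, n = (head(feats[:, i]) for i in range(3))
loss = triplet_loss(a, p, n)
loss.backward()
```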
“…To reduce the overlap, many methods [18, 19] introduce contrastive learning, which has proven effective at pulling features from the same class together and pushing apart those from different classes. However, these methods generally focus on boosting the performance of the backbone network, which helps little when extracting features of novel samples.…”
Section: Introduction
confidence: 99%
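As a rough illustration of the pull-together / push-apart behaviour this excerpt refers to, here is a minimal supervised contrastive (SupCon-style) loss sketch; the temperature value and batch layout are assumptions and not taken from the cited methods [18, 19].

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """SupCon-style loss: for each anchor, same-label samples act as positives
    (pulled together) and different-label samples as negatives (pushed apart).
    `features` is (N, D) and assumed L2-normalized; `labels` is (N,)."""
    n = features.size(0)
    sim = features @ features.t() / temperature               # (N, N) similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float('-inf'))           # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                                     # anchors with >=1 positive
    loss = -(log_prob * pos_mask).sum(1)[valid] / pos_counts[valid]
    return loss.mean()

# Usage sketch with random, normalized embeddings
feats = F.normalize(torch.randn(16, 128), dim=-1)
labels = torch.randint(0, 4, (16,))
print(supervised_contrastive_loss(feats, labels))
```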
“…The basic idea is to mitigate the domain gap through data augmentation. Beyond the source domain, Das et al. [6] and Liang et al. [22] further fine-tune their models on unlabeled data in the target domain via self- or semi-supervised methods, while paper [8] demonstrates the effectiveness of using very few labeled target images. Considering the acceptable cost of limited labeled data in practice, we advocate this direction.…”
Section: Introduction
confidence: 99%