2020
DOI: 10.1007/978-3-030-58548-8_26

Negative Margin Matters: Understanding Margin in Few-Shot Classification

Cited by 244 publications (114 citation statements: 4 supporting, 110 mentioning, 0 contrasting)
References 22 publications
“…• Extensive experiments on two popular standard few-shot classification benchmark datasets, the general object dataset mini-ImageNet and the fine-grained dataset Caltech-UCSD Birds-200-2011 (CUB), show that the proposed method surpasses some state-of-the-art methods [17]-[21], [28], [34], [37], [44], and validate the feasibility of our model.…”
Section: Introduction (mentioning)
Confidence: 61%
“…We conduct experiments on the mini-ImageNet and CUB datasets and compare our model with a series of current prevailing models, including Matching Networks [10], MAML [13], Prototypical Networks [17], Relation Networks [18], TADAM [19], MetaOptNet [37], Baseline++ [20], Meta-Baseline [21], DSN [34], E3BM [44], and Neg-Cosine [28]. The experimental results are listed in Table 2 and Table 3, respectively.…”
Section: Comparison With the State-of-the-Arts (mentioning)
Confidence: 99%
“…Manifold mixup [12] shows that the pre-training and fine-tuning pipeline can achieve competitive performance in few-shot classification. Negative Margin [4] helps discriminate novel classes by avoiding wrongly mapping samples of the same novel class to multiple tasks. MML [5] calculates the multi-level similarity to capture more information.…”
Section: Meta-Learning Based Few-Shot Learning Methods (mentioning)
Confidence: 99%
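The negative-margin idea referenced above (Negative Margin [4], Neg-Cosine [28]) applies a margin to the ground-truth class logit of a scaled cosine softmax and sets that margin below zero. The NumPy sketch below is a rough illustration of that loss under the usual large-margin cosine-softmax convention, not the authors' reference implementation; the function name and the scale/margin values are illustrative assumptions.

```python
import numpy as np

def neg_margin_cosine_softmax_loss(features, weights, labels, scale=10.0, margin=-0.3):
    # (hypothetical helper) cross-entropy over scaled cosine logits with a
    # margin applied to the ground-truth class; margin < 0 relaxes the boundary.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)  # (N, D) unit embeddings
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)    # (C, D) unit class weights
    cos = f @ w.T                                                   # (N, C) cosine similarities

    logits = cos.copy()
    rows = np.arange(len(labels))
    logits[rows, labels] -= margin   # cos(theta_y) - m; with m < 0 this adds |m|
    logits *= scale                  # temperature / scale factor

    # numerically stable log-softmax, then mean negative log-likelihood
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()

# toy usage: 8 samples, 16-dim embeddings, 5 classes (random data)
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
protos = rng.normal(size=(5, 16))
lbls = rng.integers(0, 5, size=8)
print(neg_margin_cosine_softmax_loss(feats, protos, lbls))
```

With margin < 0 the ground-truth cosine is effectively increased before the softmax, i.e. the decision boundary is relaxed rather than tightened, which is the setting the cited paper argues transfers better to novel classes.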
“…As pointed out by [12,19,20], the major challenge in few-shot learning is to learn more discriminative and transferable image features for novel classes. However, these prevailing methods are still sensitive to background clutter, especially when the number of training samples per category is limited.…”
Section: SGCA for Generalizable Representation Learning (mentioning)
Confidence: 99%