2020
DOI: 10.1109/tnnls.2019.2957187

Few-Shot Learning With Geometric Constraints

Abstract: In this paper, we consider the problem of few-shot learning for classification. We assume a network trained for base categories with a large number of training examples, and we aim to add novel categories to it that have only a few, e.g., one or five, training examples. This is a challenging scenario because (1) high performance is required in both the base and novel categories, and (2) training the network for the new categories with few training examples can contaminate the feature space trained well for the…
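The scenario the abstract describes, a network trained on base categories that must also recognize novel categories from one or five examples, is commonly handled by attaching novel-class weights computed from the few available support features while leaving the base classifier untouched. The sketch below is a minimal, hypothetical illustration of that setting using mean-feature prototypes as novel-class weights; it is not the paper's actual method, and the names (`feature_extractor`, `base_weights`, `imprint_novel_weights`) are assumptions made only for this example.

```python
import torch
import torch.nn.functional as F

def imprint_novel_weights(feature_extractor, base_weights, support_x, support_y, num_novel):
    """Extend a base classifier with novel-class weights built from a few support examples.

    base_weights: (num_base, d) L2-normalized classifier weights for the base categories.
    support_x:    (k * num_novel, C, H, W) the few (e.g., 1 or 5 per class) novel images.
    support_y:    (k * num_novel,) novel labels in [0, num_novel).
    """
    with torch.no_grad():
        feats = F.normalize(feature_extractor(support_x), dim=1)   # (N, d)
        novel_weights = torch.stack([
            F.normalize(feats[support_y == c].mean(dim=0), dim=0)  # mean feature per novel class
            for c in range(num_novel)
        ])                                                          # (num_novel, d)
    # Joint classifier over base + novel categories (cosine-similarity scoring).
    return torch.cat([base_weights, novel_weights], dim=0)          # (num_base + num_novel, d)

def classify(feature_extractor, all_weights, x, scale=10.0):
    """Score an input against both base and novel categories."""
    feats = F.normalize(feature_extractor(x), dim=1)
    return scale * feats @ all_weights.t()                          # logits over all categories
```

Because the base weights are not updated by the few novel examples, this kind of scheme is one way to avoid the feature-space contamination the abstract warns about; how the paper itself balances base and novel performance is not recoverable from the truncated abstract.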


Cited by 54 publications (20 citation statements)
References 27 publications
“…[24] additionally learns a category-agnostic mapping to transform the mean-sample representation to its class-prototype representation, considering the prototypes in Prototypical Networks [13] may be inaccurate. Furthermore, finetuning with geometric constraints is applied in [25] to force intra-class similarity and inter-class disparity. [26] transfers knowledge from base classes to novel classes by embedding image features as class adapting principal directions, instead of the vector representation adopted by previous works.…”
Section: Related Work, A. Few-Shot Learning (mentioning; confidence: 99%)
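As a rough illustration of what "forcing intra-class similarity and inter-class disparity" can look like during fine-tuning, the sketch below adds two geometric penalty terms: one pulling each feature toward its class prototype and one pushing prototypes of different classes apart. This is an assumed, generic formulation written only to make the idea concrete; it is not the exact constraint used in [25], and the margin value is arbitrary.

```python
import torch
import torch.nn.functional as F

def geometric_constraint_loss(features, labels, num_classes, margin=0.5):
    """Illustrative geometric penalties on L2-normalized features.

    intra: distance of each feature to its own class prototype (pull together).
    inter: hinge on pairwise prototype similarity (push different classes apart).
    Assumes every class in [0, num_classes) appears at least once in the batch.
    """
    feats = F.normalize(features, dim=1)
    protos = torch.stack([
        F.normalize(feats[labels == c].mean(dim=0), dim=0)
        for c in range(num_classes)
    ])                                                    # (num_classes, d)

    # Intra-class similarity: features should be close to their class prototype.
    intra = (1.0 - (feats * protos[labels]).sum(dim=1)).mean()

    # Inter-class disparity: prototypes of different classes should not be too similar.
    sim = protos @ protos.t()                             # cosine similarities
    off_diag = sim[~torch.eye(num_classes, dtype=torch.bool)]
    inter = F.relu(off_diag - margin).mean()

    return intra + inter
```

In practice such a term would be added to the usual classification loss during fine-tuning on the novel categories, so the feature geometry stays compact within classes and separated across classes.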
“…Probably because physicians can use the experience from themselves to recognize them, 22,23 and the network cannot. Moreover, is not this one of the mechanisms of meta‐learning? 24 We may use this mechanism to address the issue of limited deep‐learning‐based models with the lack of training samples. So, why don't we use the principle of meta‐learning to build a network?…”
Section: Introduction (mentioning; confidence: 99%)
“…Few-shot learning aims to solve the data scarcity issue, which can recognize novel categories effectively with only a handful of labeled samples by leveraging the prior knowledge learned from previous categories. Most few-shot learning studies concentrate on computer vision domain (Fei-Fei et al, 2006; Finn et al, 2017; Jung and Lee, 2020). Recently, to handle various new or unacquainted intents popped up quickly from different domains, some few-shot IC/SF models are proposed (Geng et al, 2020; Hou et al, 2020).…”
Section: Introduction (mentioning; confidence: 99%)