2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00760

Low-Shot Learning from Imaginary Data

Abstract: Humans can quickly learn new visual concepts, perhaps because they can easily visualize or imagine what novel objects look like from different views. Incorporating this ability to hallucinate novel instances of new concepts might help machine vision systems perform better low-shot learning, i.e., learning concepts from few examples. We present a novel approach to low-shot learning that uses this idea. Our approach builds on recent progress in meta-learning ("learning to learn") by combining a meta-learner with…
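The abstract sketches the core idea: a learned hallucinator imagines extra examples of a novel class, and the augmented set is handed to the learner. The snippet below is only a minimal illustration of that data flow under the assumption of precomputed feature vectors; the Gaussian-jitter `hallucinate` stand-in and the nearest-centroid classifier are placeholders for brevity, not the paper's trained components.

```python
# Minimal sketch (not the paper's actual architecture): hallucinate extra
# examples for a low-shot class, then fit a simple classifier on the
# union of real and imagined data. Features are assumed precomputed.
import numpy as np

rng = np.random.default_rng(0)

def hallucinate(seed_features, n_extra, noise_scale=0.1):
    """Placeholder generator G(x, z): noisy copies of random real seeds."""
    seeds = seed_features[rng.integers(0, len(seed_features), size=n_extra)]
    return seeds + noise_scale * rng.standard_normal(seeds.shape)

def fit_centroids(features_by_class):
    """Nearest-centroid classifier: one mean vector per class."""
    return {c: f.mean(axis=0) for c, f in features_by_class.items()}

def predict(centroids, query):
    return min(centroids, key=lambda c: np.linalg.norm(query - centroids[c]))

# Toy 2-way, 5-shot task with 64-d features; classes offset for separability.
support = {c: rng.standard_normal((5, 64)) + c for c in (0, 1)}
augmented = {c: np.vstack([x, hallucinate(x, n_extra=20)])
             for c, x in support.items()}

centroids = fit_centroids(augmented)
query = rng.standard_normal(64) + 1          # sample drawn near class 1
print(predict(centroids, query))             # expected: 1
```

In the paper itself the hallucinator is a network trained jointly with the meta-learner so that the imagined examples directly improve the classification objective, rather than the fixed noise model used above.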

Cited by 704 publications (475 citation statements)
References 18 publications
“…with current ML techniques. However, advances in low-shot learning (e.g., Wang, Girshick, Hebert, & Hariharan, 2018) may allow our annotations to be used to create such a fine-grained classifier in the future.…”
Section: Methods
confidence: 99%
“…Few-shot learning. There is a broad array of few-shot learning approaches, including, among many: gradient descent-based approaches [1,11,38,44], which learn how to rapidly adapt a model to a given few-shot recognition task via a small number of gradient descent iterations; metric learning based approaches that learn a distance metric between a query, i.e., test image, and a set of support images, i.e., training images, of a few-shot task [26,52,54,56,58]; methods learning to map a test example to a class label by accessing memory modules that store training examples for that task [12,25,34,37,49]; approaches that learn how to generate the weights of a classifier [13,16,42,43] or of a multi-layer neural network [3,18,19,57] for the new classes given the few available training data for each of them; methods that "hallucinate" additional examples of a class from a reduced amount of data [20,56].…”
Section: Related Work
confidence: 99%
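As a concrete reference point for the metric-learning family cited in the excerpt above, the sketch below is a small prototype-style classifier, assuming support and query images have already been embedded as vectors. It mirrors the common prototypical-network recipe rather than any single referenced method.

```python
# Prototype-style metric learning sketch (mirrors the prototypical-network
# recipe, not any single cited method). Assumes embeddings are precomputed.
import numpy as np

def prototypes(support_embeddings, support_labels, n_classes):
    """Mean embedding per class from the few labeled support examples."""
    return np.stack([support_embeddings[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_embedding, protos):
    """Label of the nearest prototype under Euclidean distance."""
    return int(np.argmin(np.linalg.norm(protos - query_embedding, axis=1)))

# Toy 3-way, 5-shot episode with 32-d embeddings, classes offset for clarity.
rng = np.random.default_rng(1)
labels = np.repeat(np.arange(3), 5)
embeds = rng.standard_normal((15, 32)) + labels[:, None]
print(classify(rng.standard_normal(32) + 2, prototypes(embeds, labels, 3)))
```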
“…In terms of a generative model, Wang et al. [26] generated data that has similar characteristics to the training examples for novel categories. Similarly, Hariharan and Girshick [27] presented an example generation function where transformations learned from base categories are applied to the examples of novel categories.…”
Section: Related Work
confidence: 99%
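The transformation-based generation attributed to Hariharan and Girshick in the excerpt above can be sketched as re-applying within-class difference vectors harvested from a data-rich base class to the few examples of a novel class. The random pairing strategy and feature dimensions below are illustrative assumptions, not the published procedure.

```python
# Illustrative sketch of transformation-based hallucination: within-class
# difference vectors from a data-rich base class are re-applied to a
# single novel-class example. Pairing and dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def base_transformations(base_features, n_pairs=50):
    """Difference vectors between random pairs of base-class examples."""
    i = rng.integers(0, len(base_features), size=n_pairs)
    j = rng.integers(0, len(base_features), size=n_pairs)
    return base_features[j] - base_features[i]

def synthesize(novel_example, deltas, n_new=20):
    """Apply randomly chosen base-class deltas to one novel example."""
    return novel_example + deltas[rng.integers(0, len(deltas), size=n_new)]

base = rng.standard_normal((200, 64))    # plentiful base-class features
novel = rng.standard_normal(64)          # a single novel-class example
print(synthesize(novel, base_transformations(base)).shape)   # (20, 64)
```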
“…To assess the benefit of our fine-tuning strategy, we show the accuracy when not fine-tuning the network. In this case, the weights of novel categories are… [table residue: caption fragment referencing [26] and [19]; columns "Both" and "Both with prior" for k = 1, 2, 5, 10, 20]”
Section: Ablation Study
confidence: 99%