2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00969

Variational Prototyping-Encoder: One-Shot Learning With Prototypical Images

Abstract: In daily life, graphic symbols, such as traffic signs and brand logos, are ubiquitously used around us due to their intuitive expression beyond language boundaries. We tackle an open-set graphic symbol recognition problem by one-shot classification with prototypical images as a single training example for each novel class. We take an approach that learns a generalizable embedding space for novel tasks. We propose a new approach called variational prototyping-encoder (VPE) that learns the image translation task fr…

Cited by 63 publications (56 citation statements)
References 22 publications
“…Since the prototypical network [9] has shown better performance than more complicated few-shot learning models [6-8], there have been a number of extensions based on the same structure shown in Figure 1 [10, 13-15]. Sung et al [10] added the relation module after the embedding module for more fine-grained classification.…”
Section: Few-shot Classification Problem
confidence: 99%
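For context, the following is a minimal PyTorch sketch (not code from any of the cited papers) of the shared prototypical-network structure these extensions build on: support embeddings are averaged into per-class prototypes and queries are scored by negative squared distance. The relation module of Sung et al [10] would replace this fixed distance with a learned comparator network.

```python
import torch

def prototypical_logits(support_emb, support_labels, query_emb, n_classes):
    """Score queries by negative squared distance to class prototypes.

    support_emb:    (N_support, D) embeddings of the support set
    support_labels: (N_support,)   integer labels in [0, n_classes)
    query_emb:      (N_query, D)   embeddings of the query set
    """
    # Class prototype = mean embedding of that class's support examples.
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                                  # (n_classes, D)
    # Negative squared Euclidean distance acts as classification logits.
    return -torch.cdist(query_emb, prototypes) ** 2     # (N_query, n_classes)
```

Training then typically minimizes cross-entropy between these logits and the query labels over episodes sampled from the base classes.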
“…Based on the prototypical networks, Wertheimer et al [14] use a concatenation of foreground and background vector representations as feature vectors. Kim et al [15] introduced the variational autoencoder (VAE) structure [31] into the embedding module for training the prototype images.…”
Section: Few-shot Classification Problem
confidence: 99%
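As a rough illustration of the idea attributed to Kim et al [15], the sketch below shows a VAE-style embedding module whose decoder reconstructs the class's prototypical image rather than the input image. The layer sizes, 64x64 input resolution, and loss weighting are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEEmbedding(nn.Module):
    """Sketch: encode a real-world image to a latent code, decode toward the
    class prototype image (assumed to be in [0, 1] and 64x64 RGB)."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x, prototype):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(z)
        # Reconstruction target is the prototype image, plus the usual KL term.
        recon_loss = F.binary_cross_entropy(recon, prototype, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return mu, recon_loss + kl
```

At test time, classification would typically compare the latent mean of a query image against the latent means of the encoded prototype images by nearest-neighbor distance.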
“…These works often adopt a learning-to-learn paradigm that distills knowledge learned from training categories to help learn novel concepts. For example, (Vinyals et al 2016; Snell, Swersky, and Zemel 2017; Sung et al 2018; Kim et al 2019) learn an embedding and metric function from the base categories to recognize samples in the novel categories well. Most of these works evaluate their algorithms on small-scale datasets, e.g., miniImageNet with 64 base categories, 16 validation categories, and 20 novel categories.…”
Section: Related Work
confidence: 99%
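The episodic setup mentioned here (learning an embedding and metric from base categories, then evaluating on novel ones) is commonly driven by N-way K-shot task sampling; a minimal sketch with hypothetical helper names follows.

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode from base-category data.

    labels: list of integer class labels, one per dataset index.
    Assumes each class has at least k_shot + n_query examples.
    Returns support and query index lists for randomly chosen classes.
    """
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for c in classes:
        picked = random.sample(by_class[c], k_shot + n_query)
        support += picked[:k_shot]
        query += picked[k_shot:]
    return support, query
```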