2017
DOI: 10.48550/arxiv.1711.04043
Preprint

Few-Shot Learning with Graph Neural Networks

Abstract: We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended…
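The abstract frames few-shot classification as label inference on a fully connected graph whose nodes carry an image embedding together with its label when observed. A minimal PyTorch sketch of that idea follows; it is not the authors' reference implementation, and the adjacency MLP, hidden sizes, and class names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacencyMLP(nn.Module):
    """Learn edge weights from absolute node-feature differences (assumed metric)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        # x: (batch, n_nodes, dim)
        diff = (x.unsqueeze(2) - x.unsqueeze(1)).abs()   # (b, n, n, dim)
        logits = self.net(diff).squeeze(-1)              # (b, n, n)
        return F.softmax(logits, dim=-1)                 # row-normalized adjacency

class GraphConvBlock(nn.Module):
    """One message-passing step: aggregate neighbors, then a linear update."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.adj = AdjacencyMLP(in_dim)
        self.linear = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x):
        a = self.adj(x)                                  # learned adjacency
        agg = torch.bmm(a, x)                            # neighbor aggregation
        return F.leaky_relu(self.linear(torch.cat([x, agg], dim=-1)))

class FewShotGNN(nn.Module):
    """Sketch: each node = CNN embedding concatenated with a label channel."""
    def __init__(self, feat_dim, n_classes, hidden=48, n_layers=3):
        super().__init__()
        dims = [feat_dim + n_classes] + [hidden] * n_layers
        self.blocks = nn.ModuleList(
            [GraphConvBlock(dims[i], dims[i + 1]) for i in range(n_layers)]
        )
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, feats, labels_onehot):
        # feats: (b, n_nodes, feat_dim); labels_onehot: (b, n_nodes, n_classes)
        x = torch.cat([feats, labels_onehot], dim=-1)
        for block in self.blocks:
            x = block(x)
        return self.classifier(x)                        # per-node class logits
```

In the episode setup sketched here, support images carry a one-hot label channel, the query node carries a uniform label vector, and the network is trained to predict the query node's class from the final node states.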

Cited by 148 publications (200 citation statements: 1 supporting, 199 mentioning, 0 contrasting).
References 15 publications.
“…For comparisons, different approaches were used for fault conditions with the same dataset, which are Siamese Net, Matching Net, SAE+RF [34,36]. Among them, Siamese Net is recognized as a simple and effective method, which is widely used in computer science, finance, medicine and other fields.…”
Section: Results (mentioning)
confidence: 99%
“…Initialization based methods aim to learn good model initialization (i.e., the parameters of a network) so that the classifier for novel classes can be learned with a few labeled samples and a few gradient updated steps [38,8,9,42]. Metric learning based methods aim to learn a sophisticated comparison model to determine the similarity of two images [32,48,46,47,35,12]. Hallucination based methods learn a generator from samples in the base classes and use the learned generator to hallucinate new novel class samples for data augmentation [14,43,55,5,4].…”
Section: Few Shot Learning (mentioning)
confidence: 99%
“…As shown in Figure 2, our framework integrates an object-level GCN and an image-level GCN for visual feature encoding. The first GCN focuses on objects in local regions to explore spatial objectlevel relation [6,39,53,61], while the second one targets on the image-level similarity relation [12,47] amongst multiple similar images. We calculate the similarity of different images and select the images with high similarity as a meaningful complementary global visual embedding to ensure a more reasonable and accurate text description generation.…”
Section: Introduction (mentioning)
confidence: 99%
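The excerpt above selects high-similarity images as a complementary global visual embedding. A minimal sketch of that selection step, assuming cosine similarity over pre-computed image embeddings (the function and argument names are illustrative, not taken from the cited paper):

```python
import torch
import torch.nn.functional as F

def select_similar_images(query_emb: torch.Tensor, candidate_embs: torch.Tensor, k: int = 3):
    """Return the k candidate embeddings most cosine-similar to the query.

    query_emb: (dim,); candidate_embs: (n, dim). Illustrative sketch only.
    """
    sims = F.cosine_similarity(query_emb.unsqueeze(0), candidate_embs, dim=-1)  # (n,)
    top = sims.topk(min(k, sims.numel()))
    return candidate_embs[top.indices], top.values
```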