2022
DOI: 10.3390/s22197640
Few-Shot Fine-Grained Image Classification via GNN

Abstract: Traditional deep learning methods such as convolutional neural networks (CNNs) have a high requirement for the number of labeled samples. In some cases, the cost of obtaining labeled samples is too high to collect enough of them. Few-shot learning (FSL) addresses this problem. Currently, typical FSL methods work well on coarse-grained image data, but not as well on fine-grained image classification, as they cannot properly assess the in-class similarity and inter-class difference of fine-grained i…

Cited by 9 publications (5 citation statements). References 43 publications.
“…The SimAM module proposed by [17] directly estimates three-dimensional attention weights. That work confirmed that it outperforms [19], [20] in convolutional networks without adding extra computation. Therefore, inspired by the attention mechanism in the human brain proposed in [24], this paper integrates a three-dimensional SimAM-based attention module into the feature extraction network of few-shot learning to help the model extract features with stronger generalization ability.…”
Section: B. Attention Mechanism (supporting)
confidence: 66%
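The quoted passage describes SimAM's parameter-free, three-dimensional weighting: each activation is scaled by a sigmoid of an inverse-energy term derived from its deviation from the per-channel spatial mean. A minimal NumPy sketch of that formulation follows — the function name `simam`, the `(C, H, W)` layout, and the regularizer `lam` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def simam(x, lam=1e-4):
    """Sketch of parameter-free SimAM attention (illustrative, not official code).

    x: feature map of shape (C, H, W).
    Each activation is weighted by the sigmoid of an inverse-energy term
    computed from its squared deviation from the per-channel spatial mean.
    """
    c, h, w = x.shape
    n = h * w - 1                                # neurons per channel minus the target
    mu = x.mean(axis=(1, 2), keepdims=True)      # per-channel spatial mean
    d = (x - mu) ** 2                            # squared deviation per activation
    v = d.sum(axis=(1, 2), keepdims=True) / n    # per-channel variance estimate
    e_inv = d / (4 * (v + lam)) + 0.5            # inverse energy = importance
    return x * (1.0 / (1.0 + np.exp(-e_inv)))    # sigmoid-gated 3-D weights

feat = np.random.randn(8, 14, 14).astype(np.float32)
out = simam(feat)
```

Because the weighting is computed directly from the features, the module adds no learnable parameters, which matches the excerpt's claim of no additional computation cost relative to parameterized attention blocks.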
“…The few-shot image classification paradigm is shown in Figure 2. This visual task has prompted many classic works [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]. The most similar to this work are metric-based methods, which classify by measuring the distance between support-set and query-set samples. For example, [2] proposed the matching network, a general framework whose core idea is to map images into an embedding space that also encapsulates the label distribution, project the test image into the same embedding space with a separate architecture, and then measure cosine similarity to classify. [1] proposed the prototype network, which differs from the matching network in its distance measure: a prototype representation is created for each class, and classification is determined by the Euclidean distance between the class prototype and the query point. [20] uses graph convolutional neural networks instead of plain convolutional networks to extract features; [21] shows that adding fine-tuning at classification time on top of metric learning improves classification performance. This work builds on metric-based methods.…”
Section: Related Work, A. Few-Shot Image Classification (mentioning)
confidence: 99%
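The prototype-network idea quoted above (one prototype per class, nearest Euclidean prototype wins) can be sketched in a few lines. The shapes, names, and the toy 2-way/2-shot episode below are illustrative assumptions, not the cited paper's code.

```python
import numpy as np

def classify_by_prototype(support, support_labels, query, n_way):
    """Prototypical-network classification sketch (illustrative).

    support: (n_support, d) embeddings of labeled support samples.
    support_labels: (n_support,) integer class ids in [0, n_way).
    query: (n_query, d) embeddings of unlabeled query samples.
    Returns one predicted class id per query embedding.
    """
    # One prototype per class: the mean of that class's support embeddings.
    protos = np.stack([support[support_labels == k].mean(axis=0)
                       for k in range(n_way)])             # (n_way, d)
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)                            # nearest prototype wins

# Toy 2-way, 2-shot episode: class 0 near the origin, class 1 near (5, 5).
sup = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
lab = np.array([0, 0, 1, 1])
qry = np.array([[0.2, 0.2], [4.8, 5.2]])
pred = classify_by_prototype(sup, lab, qry, n_way=2)
# → array([0, 1])
```

Swapping the squared-Euclidean line for a cosine-similarity argmax would give the matching-network-style variant the excerpt contrasts it with.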
“…The power of GNNs also extends beyond graph-structured data, as they have been effectively applied to non-graph-structured data as well. This includes areas such as document classification [58], image classification [59], [60], person re-identification [61], [62], and action recognition [63], [64]. Since GNNs are inherently limited by a graph structure that allows only one-to-one relationships between vertices, some researchers have turned to hypergraphs [65] and Hypergraph Neural Networks (HGNNs) [66].…”
Section: Graph and Hypergraph Neural Network (mentioning)
confidence: 99%
“…Image classification is divided into three main categories according to the granularity of the classes: cross-species semantic-level image classification, fine-grained image classification, and instance-level image classification. The fine-grained image classification studied in this paper has been a hot topic in recent years and has a wide range of applications in industry, academia, and everyday life [1]–[4]. Fine-grained image classification refers to a more detailed sub-class division within a coarse-grained category.…”
Section: Introduction (mentioning)
confidence: 99%