Fine-grained few shot learning with foreground object transformation
2021
DOI: 10.1016/j.neucom.2021.09.016

Cited by 16 publications (10 citation statements)
References 14 publications
“…A few generic few-shot learning studies have explored the performance of their algorithms in FSFG settings [11,23,24]. However, the nature of the fine-grained domain is hardly addressed, except by some recently published specialized approaches [46,47,48,49,50,51].…”
Section: Few-shot Fine-grained Visual Categorization (mentioning)
confidence: 99%
“…A deep data-augmentation based method has also been proposed for FSFG [50], which achieves significant improvement in generalization performance, but at an exorbitant computational cost. In theory, augmentation-based techniques can work in tandem with other techniques, including ours, but the engineering required to bind them together is not explored here.…”
Section: Few-shot Fine-grained Visual Categorization (mentioning)
confidence: 99%
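The statement above notes that augmentation-based FSFG methods can, in principle, be combined with other few-shot learners. The sketch below shows one generic way to do that: synthetic samples are appended to the support set before any downstream classifier sees it. The `generator` callable and the Gaussian-jitter stand-in are illustrative placeholders, not the method of [50] or of the cited paper.

```python
import torch

def expand_support_set(support_x, support_y, generator, n_aug=2):
    """Minimal sketch (assumed, not from the paper): fold samples from a learned
    generator into a few-shot support set so a metric-based learner can use them."""
    xs, ys = [support_x], [support_y]
    for _ in range(n_aug):
        xs.append(generator(support_x))  # synthetic samples inherit their source labels
        ys.append(support_y)
    return torch.cat(xs), torch.cat(ys)

# Toy usage: 5-way 1-shot support with a stand-in "generator" (Gaussian jitter).
support_x = torch.randn(5, 3, 84, 84)
support_y = torch.arange(5)
gen = lambda x: x + 0.05 * torch.randn_like(x)
big_x, big_y = expand_support_set(support_x, support_y, gen)
print(big_x.shape, big_y.shape)  # torch.Size([15, 3, 84, 84]) torch.Size([15])
```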
“…Then they further employed a new low-rank bilinear pooling operation [14], and designed a feature alignment layer to match the support features with the query features. Meanwhile, FOT [25] is a data augmentation method that uses a posture transformation generator to generate additional samples of novel sub-categories. Zhu et al. [12] proposed a multi-attention meta-learning method that exploits the attention mechanisms of the base learner and the task learner to capture different parts of an image.…”
Section: The Related Work (mentioning)
confidence: 99%
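For context on the low-rank bilinear pooling referenced above, here is a minimal sketch of the generic Hadamard-product form of that operation. The class name, dimensions, and tanh activation are assumptions chosen for illustration; this is not the exact layer of [14] or of the citing paper.

```python
import torch
import torch.nn as nn

class LowRankBilinearPool(nn.Module):
    """Generic low-rank bilinear pooling (sketch): z = P^T (tanh(U^T x) * tanh(V^T y)).
    The element-wise product replaces the full bilinear outer product, keeping the
    parameter count low-rank."""

    def __init__(self, dim_x, dim_y, rank, dim_out):
        super().__init__()
        self.U = nn.Linear(dim_x, rank, bias=False)    # projects the first feature
        self.V = nn.Linear(dim_y, rank, bias=False)    # projects the second feature
        self.P = nn.Linear(rank, dim_out, bias=False)  # maps the joint rank-space to the output

    def forward(self, x, y):
        joint = torch.tanh(self.U(x)) * torch.tanh(self.V(y))  # Hadamard product
        return self.P(joint)

# Example: fuse a support feature and a query feature of dimension 512.
pool = LowRankBilinearPool(dim_x=512, dim_y=512, rank=128, dim_out=64)
z = pool(torch.randn(4, 512), torch.randn(4, 512))
print(z.shape)  # torch.Size([4, 64])
```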
“…In contrast, humans can identify new classes with only a few labeled examples. Recently, some studies [25,31] have focused on a more challenging setting that aims to recognize fine-grained images from few samples, called fine-grained few-shot learning (FG-FSL). Learning from fine-grained images with few samples brings two challenges.…”
Section: Introduction (mentioning)
confidence: 99%
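To make the FG-FSL setting mentioned above concrete, the sketch below samples an N-way K-shot episode from precomputed features of fine-grained sub-categories and scores the query set with a nearest-prototype rule. The episode sizes and the prototype classifier are illustrative assumptions about the standard protocol, not the specific method of [25] or [31].

```python
import numpy as np

def episode_accuracy(features, labels, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sketch of an N-way K-shot FG-FSL episode: sample n_way sub-categories,
    build a class prototype from k_shot support features per class, and classify
    each query feature by its nearest prototype (squared Euclidean distance)."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    protos, queries, q_lab = [], [], []
    for i, c in enumerate(classes):
        idx = rng.permutation(np.where(labels == c)[0])
        protos.append(features[idx[:k_shot]].mean(axis=0))       # class prototype
        queries.append(features[idx[k_shot:k_shot + n_query]])   # held-out queries
        q_lab += [i] * n_query
    protos = np.stack(protos)                                     # (n_way, d)
    q = np.concatenate(queries)                                   # (n_way * n_query, d)
    dists = ((q[:, None, :] - protos[None, :, :]) ** 2).sum(-1)   # squared Euclidean
    return (dists.argmin(axis=1) == np.array(q_lab)).mean()

# Toy run: 20 fine-grained sub-categories, 30 random "features" each, dimension 64.
feats = np.random.randn(600, 64)
labs = np.repeat(np.arange(20), 30)
print(episode_accuracy(feats, labs))
```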