2021
DOI: 10.1109/access.2021.3065904
Feature Transformation Network for Few-Shot Learning

Abstract: Few-shot learning aims to learn a novel concept from only a handful of labeled samples. Because the training data are so scarce, deep networks run a high risk of over-fitting. Although many previous metric-based approaches have made significant progress on this challenge, they ignore the association between the query set and the support set when learning sample representations, and they also fail to focus attention on the target region. To cope with these issues, we propose a novel feature tr…
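The abstract contrasts the proposed method with metric-based approaches. The following is a minimal sketch of that general metric-based episode classification pattern (class prototypes plus cosine similarity), not the feature transformation network proposed in the paper; the function names, tensor shapes, and choice of cosine metric are illustrative assumptions.

```python
# Minimal sketch of a generic metric-based few-shot episode classifier
# (the baseline family the abstract refers to), NOT the paper's feature
# transformation network. Names, shapes, and the cosine metric are assumptions.
import torch
import torch.nn.functional as F


def metric_episode_logits(support_feats, support_labels, query_feats, n_way):
    """Classify query samples by cosine similarity to class prototypes.

    support_feats: (n_way * k_shot, d) embeddings of the support set
    support_labels: (n_way * k_shot,) integer labels in [0, n_way)
    query_feats: (n_query, d) embeddings of the query set
    """
    # Class prototypes: mean embedding of each class's support samples.
    prototypes = torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in range(n_way)]
    )  # (n_way, d)

    # Cosine similarity between every query and every prototype.
    logits = F.cosine_similarity(
        query_feats.unsqueeze(1), prototypes.unsqueeze(0), dim=-1
    )  # (n_query, n_way)
    return logits


if __name__ == "__main__":
    # Toy 5-way 1-shot episode with random features, just to show the shapes.
    n_way, k_shot, n_query, d = 5, 1, 15, 64
    support = torch.randn(n_way * k_shot, d)
    labels = torch.arange(n_way).repeat_interleave(k_shot)
    queries = torch.randn(n_query, d)
    print(metric_episode_logits(support, labels, queries, n_way).shape)  # (15, 5)
```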



Cited by 4 publications (3 citation statements) · References 32 publications

Citation statements, ordered by relevance:
“…Attention mechanism classification methods include CAN [13], SEGA [15], CTM [16], MetaOpt [14], TPNet [11], MATANet [12]. Local feature classification methods include MLFRNet [17].…”
Section: Results (mentioning)
Confidence: 99%
“…Recently, few-shot learning has emerged as a promising approach to address data scarcity challenges in the medical field. Few-shot learning models excel at identifying new categories without the need for retraining, making them particularly suited for tasks where labeled data are limited or expensive to collect [13]. Among the various techniques for few-shot learning, one notable method is Model-Agnostic Meta-Learning (MAML) [14].…”
Section: Introduction (mentioning)
Confidence: 99%
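For reference on the MAML mention above, here is a small, hypothetical sketch of one MAML inner-loop adaptation step; the model, loss, inner learning rate, and the use of torch.func.functional_call (PyTorch 2.x) are illustrative assumptions, not details taken from the cited works.

```python
# Hypothetical sketch of one MAML inner-loop adaptation step.
# Assumes PyTorch 2.x for torch.func.functional_call.
import torch
import torch.nn.functional as F


def maml_inner_step(model, support_x, support_y, inner_lr=0.01):
    """Return task-adapted parameters after one gradient step on the support set."""
    params = dict(model.named_parameters())
    # Forward pass with the current parameters on the task's support set.
    logits = torch.func.functional_call(model, params, (support_x,))
    loss = F.cross_entropy(logits, support_y)
    # Keep the graph so the outer (meta) loss can backpropagate through adaptation.
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    # Adapted parameters: theta' = theta - alpha * grad.
    return {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}
```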
“…This paper proposes a classification network of clean and degraded images based on multi-task learning and consistency regularization with the cosine similarity loss [25], [26]. Our proposed method makes a network learn the classification and degradation levels for degraded images as multi-task learning.…”
Section: Introduction (mentioning)
Confidence: 99%
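The excerpt above mentions consistency regularization with a cosine similarity loss. As a rough illustration only (the cited paper's exact formulation is not reproduced here), a cosine-based consistency term between clean and degraded features could look like the following sketch, in which all names are hypothetical.

```python
# Hedged sketch of a cosine-similarity consistency term between the features
# of a clean image and its degraded counterpart; names and reduction are assumptions.
import torch
import torch.nn.functional as F


def cosine_consistency_loss(clean_feats, degraded_feats):
    """Encourage features of clean and degraded views of the same image to align:
    loss = 1 - mean cosine similarity over the batch."""
    cos = F.cosine_similarity(clean_feats, degraded_feats, dim=-1)  # (batch,)
    return (1.0 - cos).mean()
```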