2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
DOI: 10.1109/icmew53276.2021.9455950
Distribution Estimation Based Pseudo-Feature Library Generation For Few-Shot Image Classification

Abstract: Due to the high cost of labeled data acquisition, few-shot learning has attracted great attention in recent years. The biased estimation of class distributions from only a few labeled samples limits model performance. Some existing methods generate samples or features with a learned module or network. In this paper, a distribution-based pseudo-feature library generation method is proposed, following a simple distribution-modeling hypothesis. Base class features are introduced to better estimate the nov…
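The abstract only sketches the idea, but the general recipe it alludes to can be illustrated. The following is a minimal, hypothetical sketch of distribution-based pseudo-feature generation for one novel class: borrow statistics from the nearest base classes to calibrate a Gaussian for the novel class, then sample a pseudo-feature library from it. This follows the common calibration strategy of Yang et al. (2021) rather than the paper's exact rule, which is truncated in the abstract; all function and parameter names are illustrative.

```python
import numpy as np

def generate_pseudo_features(support, base_means, base_covs,
                             k=2, n_samples=100, alpha=0.2, seed=None):
    """Sketch of pseudo-feature library generation (illustrative, not the
    paper's exact method).

    support:    (n_shot, d) support features of one novel class.
    base_means: (n_base, d) per-base-class feature means.
    base_covs:  (n_base, d, d) per-base-class feature covariances.
    """
    rng = np.random.default_rng(seed)
    d = support.shape[1]
    proto = support.mean(axis=0)
    # Pick the k base classes whose means lie closest to the novel prototype.
    dists = np.linalg.norm(base_means - proto, axis=1)
    nearest = np.argsort(dists)[:k]
    # Calibrate: average the novel prototype with the nearest base means,
    # and reuse their covariances plus a small regularizer alpha * I.
    mean = (base_means[nearest].sum(axis=0) + proto) / (k + 1)
    cov = base_covs[nearest].mean(axis=0) + alpha * np.eye(d)
    # Sample the pseudo-feature library from the calibrated Gaussian.
    return rng.multivariate_normal(mean, cov, size=n_samples)
```

A classifier for the novel class can then be trained on the sampled pseudo-features together with the real support features, which is the usual way such a library is consumed.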

Cited by 4 publications (2 citation statements) · References 17 publications
“…The other generation-based methods (Yang et al. 2021; Zhang et al. 2021) transfer information from similar base classes to the few-shot classes. Yang et al. (2021) propose a distribution calibration strategy to calibrate the biased feature distributions of few-shot classes toward the corresponding ground-truth distributions.…”

Section: Related Work (mentioning)

confidence: 99%
“…Representation learning, on the other hand, aims to identify feature transformations that allow simple statistical techniques, such as nearest-neighbor and Bayesian classification, to generalize on few-shot tasks for novel classes [7,44]. From early works [25] through recent methodologies [45,48], representation-learning strategies have relied on simple statistical assumptions that may not hold across diverse datasets.…”

Section: Introduction (mentioning)

confidence: 99%