2023
DOI: 10.1016/j.image.2022.116897
Center-VAE with discriminative and semantic-relevant fine-tuning features for generalized zero-shot learning

Cited by 5 publications (2 citation statements)
References 6 publications
“…Generating synthesized visual features alleviates the classification bias to a certain extent, but gaps still exist between synthesized features and high-quality real features, as in Redundancy-Free Feature-based Generalized Zero-Shot Learning (RFF-GZSL) [11], TF-VAEGAN [26], SE-GZSL [18], and FREE [3]. Besides, in comparison with the recent TDCSS [6], CMPN [10], DFTN [17], CvDSF [40], CMC-GAN [38], and SALN [39], Co-GZSL still achieves substantial performance improvements. Moreover, although the results of CE-GZSL [12] are close to our model, a gap still exists between them.…”
Section: Comparison With State-of-the-Arts (SOTAs)
confidence: 99%
“…However, in GZSL tasks these methods tend to recognize unseen classes as seen classes because training samples for the unseen classes are lacking, which results in low accuracy during testing. Therefore, recent works propose generative-based models, such as the Generative Dual Adversarial Network (GDAN) [16], GZSL via Synthesized Examples (SE-GZSL) [32], Inference-guided Feature Generation (Inf-FG) [13], Cross-Modal Consistency GAN (CMC-GAN) [38], the Generative Adversarial Network (GAN) named f-CLSWGAN [36], center-VAE with discriminative and semantic-relevant fine-tuning features for generalized zero-shot learning (CvDSF) [40], and the Dual-Focus Transfer Network (DFTN) [17], to synthesize a sufficient number of samples for the unseen classes. In generative models, semantic features are usually mapped to synthesized visual features to compensate for the unseen classes’ lack of training samples.…”
Section: Introduction
confidence: 99%