Entropy-Based Uncertainty Calibration for Generalized Zero-Shot Learning
Preprint, 2021
DOI: 10.48550/arxiv.2101.03292

Cited by 3 publications (4 citation statements)
References 67 publications
“…Semantic Rectifying GAN (SRGAN) [22] uses manually designed distance functions to rectify over-smoothed semantic features with visual similarities. Some embedding methods [10] and VAE-based methods [27], [28] use the triplet loss to automatically search for more discriminative representations in visual features.…”
Section: A. Zero-Shot Learning (mentioning)
confidence: 99%
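The triplet loss referred to in this excerpt can be summarized with a minimal sketch. The function below is a generic margin-based formulation, not the code of the cited embedding or VAE-based methods; the 2048-dimensional feature size and the margin value are illustrative assumptions.

```python
# A minimal sketch (not the cited methods' code) of the triplet loss idea described
# in the quote: pull a visual feature toward a same-class "positive" and push it away
# from a different-class "negative" to make representations more discriminative.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin-based triplet loss over feature vectors."""
    d_pos = F.pairwise_distance(anchor, positive)   # distance to same-class sample
    d_neg = F.pairwise_distance(anchor, negative)   # distance to different-class sample
    return F.relu(d_pos - d_neg + margin).mean()    # penalize when positive is not closer by `margin`

# Toy usage with random 2048-d features (e.g. ResNet-style visual embeddings).
anchor   = torch.randn(8, 2048)
positive = torch.randn(8, 2048)
negative = torch.randn(8, 2048)
loss = triplet_loss(anchor, positive, negative)
```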
“…For example, one embedding ZSL method, Latent Discriminative Features Learning (LDF) [10], uses the triplet loss to mine new latent semantic features from visual features. Among generative methods, Entropy-based Uncertainty calibration VAE (EUC-VAE) [27] and Over-Complete Distribution VAE (OCD-VAE) [28] integrate the triplet loss into the VAE to enhance the separability of the encoded representations. EUC-VAE designs two triplet losses trained on visual features and semantic features, respectively.…”
Section: B. Triplet Loss in ZSL (mentioning)
confidence: 99%
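As a rough illustration of the "two triplet losses" idea attributed to EUC-VAE in this excerpt, the sketch below applies one triplet loss to latent codes encoded from visual features and one to latent codes encoded from semantic (attribute) features, on top of a placeholder VAE objective. The encoder definitions, dimensions, and loss weighting are assumptions for illustration only, not the EUC-VAE implementation.

```python
# Hypothetical sketch: two triplet losses, one per modality, added to a VAE objective.
# All names, dimensions, and the single-linear-layer encoders are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Maps an input feature vector to a latent code (mean only, for brevity)."""
    def __init__(self, in_dim, z_dim=64):
        super().__init__()
        self.net = nn.Linear(in_dim, z_dim)

    def forward(self, x):
        return self.net(x)

def triplet(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss over latent codes."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Illustrative encoders for visual features (2048-d) and class attributes (85-d).
visual_enc   = ToyEncoder(2048)
semantic_enc = ToyEncoder(85)

# Toy triplets in each modality (anchor / positive / negative).
v_a, v_p, v_n = (torch.randn(8, 2048) for _ in range(3))
s_a, s_p, s_n = (torch.randn(8, 85) for _ in range(3))

# One triplet loss per modality, added to whatever the base VAE loss is.
loss_visual   = triplet(visual_enc(v_a), visual_enc(v_p), visual_enc(v_n))
loss_semantic = triplet(semantic_enc(s_a), semantic_enc(s_p), semantic_enc(s_n))
base_vae_loss = torch.tensor(0.0)          # placeholder for reconstruction + KL terms
total_loss = base_vae_loss + loss_visual + loss_semantic
```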