2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/CVPR.2018.00111
A Generative Adversarial Approach for Zero-Shot Learning from Noisy Texts

Cited by 385 publications (432 citation statements); references 31 publications.
“…However, the accuracy on unseen samples drops dramatically in GZSL. As we mentioned before, the unseen-sample accuracy of GAZSL [37] drops from 68.2% (ZSL) to 19.2% (GZSL). To the best of our knowledge, there is no previous work explicitly addressing the feature confusion issue in generative ZSL.…”
Section: Generative Zero-Shot Learning (mentioning)
Confidence: 65%
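The gap the quote describes comes from the evaluation protocol: conventional ZSL classifies test samples among unseen classes only, while GZSL lets seen classes compete in the same label space. Below is a minimal sketch of the standard GZSL summary metric (the harmonic mean of per-class seen and unseen accuracy); only the 68.2%/19.2% pair comes from the quoted text, and the seen-class accuracy is a hypothetical placeholder.

```python
# Minimal sketch of ZSL vs. GZSL evaluation. In conventional ZSL, test
# samples are classified only among unseen classes; in GZSL the label space
# is the union of seen and unseen classes, so a model biased toward seen
# classes loses much of its unseen-class accuracy.

def harmonic_mean(acc_seen: float, acc_unseen: float) -> float:
    """Standard GZSL summary metric H = 2*S*U / (S + U)."""
    if acc_seen + acc_unseen == 0.0:
        return 0.0
    return 2.0 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

zsl_unseen = 0.682   # quoted GAZSL accuracy under the ZSL protocol
gzsl_unseen = 0.192  # quoted GAZSL unseen-class accuracy under GZSL
gzsl_seen = 0.850    # hypothetical seen-class accuracy, illustration only

print(f"ZSL  unseen accuracy: {zsl_unseen:.1%}")
print(f"GZSL unseen accuracy: {gzsl_unseen:.1%}")
print(f"GZSL harmonic mean H: {harmonic_mean(gzsl_seen, gzsl_unseen):.1%}")
```

Because the harmonic mean is dominated by the smaller of the two accuracies, a collapse on unseen classes like the one quoted drags the overall GZSL score down even when seen-class accuracy stays high.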
“…Thus, ZSL tasks involve a shared semantic space and a visual space in which seen and unseen samples have distinct data distributions. According to their working mechanisms, existing ZSL methods can be grouped into either embedding methods [2,6,33,36] or generative methods [17,29,31,37]. Specifically, embedding methods learn a visual-to-semantic embedding space, a semantic-to-visual embedding space, or a shared intermediate embedding space in which the two domains are connected.…”
Section: Related Work, 2.1 Zero-Shot Learning (mentioning)
Confidence: 99%
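To make the embedding family concrete, here is a minimal visual-to-semantic sketch on synthetic data: a linear map from visual features to class semantic vectors is fit on seen classes by least squares, and unseen-class samples are then assigned to the nearest unseen-class prototype in semantic space. The dimensions, the data generator, and the least-squares formulation are illustrative assumptions, not the formulation of any cited method.

```python
import numpy as np

# Minimal visual-to-semantic embedding ZSL sketch (synthetic data).
rng = np.random.default_rng(0)
d_vis, d_sem, n_seen, n_unseen, n_per = 64, 16, 10, 5, 30

# One semantic prototype per class (e.g., an attribute or text embedding).
protos = rng.normal(size=(n_seen + n_unseen, d_sem))

# Synthetic visual features: a shared random map of the prototype plus noise.
A = rng.normal(size=(d_sem, d_vis))
def sample_features(cls: int) -> np.ndarray:
    return protos[cls] @ A + 0.3 * rng.normal(size=(n_per, d_vis))

# Fit W on seen classes so that x @ W approximates the class semantics.
X_seen = np.vstack([sample_features(c) for c in range(n_seen)])
S_seen = np.repeat(protos[:n_seen], n_per, axis=0)
W, *_ = np.linalg.lstsq(X_seen, S_seen, rcond=None)

# Zero-shot test: embed unseen samples, assign the nearest unseen prototype.
unseen_protos = protos[n_seen:]
correct = total = 0
for c in range(n_seen, n_seen + n_unseen):
    emb = sample_features(c) @ W                       # (n_per, d_sem)
    dists = np.linalg.norm(emb[:, None] - unseen_protos[None], axis=2)
    correct += int(np.sum(n_seen + dists.argmin(axis=1) == c))
    total += n_per
print(f"unseen-class (ZSL) accuracy: {correct / total:.1%}")
```

Generative methods such as GAZSL take the opposite route: rather than projecting into a shared space, they synthesize visual features for unseen classes from their semantic descriptions and then train an ordinary classifier on the generated features.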