2017
DOI: 10.1007/978-3-319-71246-8_48
A Simple Exponential Family Framework for Zero-Shot Learning

Cited by 163 publications (129 citation statements)
References 25 publications
“…Observe that the proposed methods, Ours(S) and Ours(Π), consistently outperform state-of-the-art methods in the GZSL setting. Specifically, the harmonic mean of the accuracy for seen (tr) and unseen (ts) classes improves with Ours(S) and Ours(Π): whereas existing methods ([21,12,19,28]) only perform well on the seen classes and obtain close-to-zero accuracy on unseen classes, we are able to classify both seen and unseen classes, improving upon existing works in the GZSL setting.…”
Section: Generalized Zero-Shot Learning Evaluation
confidence: 79%
“…[26,16] exploit semantic manifold learning. GFZSL [52] treats unknown labels of unseen class images as latent variables and applies Expectation-Maximization (EM). As the prediction is biased to seen classes in GZSL, UE [51] maximizes the probability of predicting unlabeled images as unseen classes.…”
Section: Related Work
confidence: 99%
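The idea this statement attributes to GFZSL, treating the unknown labels of unseen-class test images as latent variables and applying Expectation-Maximization, can be sketched minimally. The toy below assumes per-class Gaussians with identity covariance; the initial means, feature dimensions, and data are hypothetical illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 2 unseen classes in a 2-D visual feature space.
# In GFZSL the initial class distributions would be predicted from
# attribute vectors; here they are hypothetical placeholders.
means = np.array([[0.0, 0.0], [4.0, 4.0]])

# Unlabeled test features drawn around the (unknown) true class means.
X = np.vstack([rng.normal([0.2, -0.1], 1.0, (50, 2)),
               rng.normal([3.8, 4.2], 1.0, (50, 2))])

for _ in range(20):
    # E-step: posterior responsibilities over the latent class labels,
    # using log N(x; mu, I) up to an additive constant.
    d = X[:, None, :] - means[None, :, :]        # (N, K, 2)
    logp = -0.5 * np.sum(d * d, axis=2)          # (N, K)
    logp -= logp.max(axis=1, keepdims=True)      # numerical stability
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate class means from the soft assignments.
    means = resp.T @ X / resp.sum(axis=0)[:, None]

# Hard labels for the unlabeled test images after convergence.
pred = resp.argmax(axis=1)
```

The refined means then serve as the class-conditional distributions used to classify the unseen-class images, which is the transductive refinement the surrounding citations describe.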
“…It nevertheless uses a single projection matrix to project visual features into the semantic space. More recently, the authors of [30] proposed to learn generative models to predict data distribution of seen and unseen classes from their attribute vectors, and used unlabeled test data to refine the distribution parameters of target classes. The work in [28] trains an end-to-end network that optimizes the loss on both seen class data and unseen test data, by minimizing the Quasi-Fully Supervised Learning loss, which uses target class data to reduce seen/unseen bias of the model during training.…”
Section: Transductive Zero-Shot Learning
confidence: 99%
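One way to read "learn generative models to predict data distribution of seen and unseen classes from their attribute vectors" is as a regression from class attributes to class-conditional distribution parameters. A minimal sketch, assuming linear ridge regression from attribute vectors to Gaussian class means (all dimensions and values are hypothetical, not from the cited work):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 5 seen classes with 3-D attribute vectors whose
# 2-D visual-feature means follow a linear map W_true (ground truth
# used only to generate this toy data).
A_seen = rng.normal(size=(5, 3))               # seen-class attributes
W_true = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.5, -0.5]])
mu_seen = A_seen @ W_true                      # per-class visual means

# Closed-form ridge regression from attributes to class means.
lam = 1e-3
W = np.linalg.solve(A_seen.T @ A_seen + lam * np.eye(3),
                    A_seen.T @ mu_seen)

# Predict the visual-space mean of an unseen class from its attributes;
# this predicted distribution can then be refined on unlabeled test
# data, as in the transductive methods discussed above.
a_unseen = np.array([1.0, -1.0, 2.0])
mu_unseen = a_unseen @ W
```

In this reading, the predicted parameters give each unseen class a starting distribution, and the transductive step adjusts those parameters using the unlabeled test data.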