2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01194

Adaptive Confidence Smoothing for Generalized Zero-Shot Learning

Abstract: Generalized zero-shot learning (GZSL) is the problem of learning a classifier where some classes have samples and others are learned from side information, like semantic attributes or text description, in a zero-shot learning fashion (ZSL). Training a single model that operates in these two regimes simultaneously is challenging. Here we describe a probabilistic approach that breaks the model into three modular components, and then combines them in a consistent way. Specifically, our model consists of three classifiers…
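The abstract is truncated before it names the three components, but the citing excerpts below indicate a soft seen/unseen gating model combined with a seen-class expert and a ZSL expert. Under that assumption, one natural probabilistic reading of "combines them in a consistent way" is a law-of-total-probability mixture (a sketch, not necessarily the paper's exact formulation):

p(y \mid x) = p(y \mid x, \text{seen})\, p(\text{seen} \mid x) + p(y \mid x, \text{unseen})\, p(\text{unseen} \mid x)

The gating model supplies p(\text{seen} \mid x); each expert scores only the classes of its own regime.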

Cited by 105 publications (95 citation statements, published 2019–2022) · References 41 publications
“…To this end, our motivations and formulations focus on the GZSL settings. Previous methods which focus on GZSL, e.g., CADA-VAE [37], PREN [43], COSMO [4] and GDAN [19], did not report the performance on ZSL evaluations. Therefore, we do not conduct extensive evaluations on ZSL but only report the performance on the most challenging dataset used in this paper, i.e., SUN, to show that our model is also effective for ZSL.…”
Section: Results of ZSL
confidence: 85%
“…To verify our proposed method, we compare it with both embedding methods: DeViSE [15], ESZSL [36], ALE [1], SAE [24], SJE [2], DEM [44]; and generative methods: f-CLSWGAN [41], GAZSL [45], cyc-CLSWGAN [14], SE [39], LisGAN [28], CADA-VAE [37], PREN [43] and COSMO [4]. The results of the compared methods are cited from the original papers 1 and the recent survey paper [42].…”
Section: AWA1
confidence: 99%
“…These results are given for the data sets CUB, SUN, AWA1 and AWA2. We compare our approach with 12 leading GZSL methods, which are divided into three groups: semantic (SJE [24], ALE [25], LATEM [26], ES-ZSL [27], SYNC [12], DEVISE [2]), latent space learning (SAE [15], f-CLSWGAN [11], cycle-WGAN [3] and CADA-VAE [4]) and domain classification (CMT [6] and DAZSL [5]). The semantic group contains methods that only use the seen class visual and semantic samples to learn a transformation function from the visual to the semantic space, and classification is based on nearest neighbour classification in that semantic space.…”
Section: Results
confidence: 99%
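To make the "semantic group" recipe in the excerpt above concrete, here is a minimal nearest-neighbour sketch in attribute space; the projection W and the function name are illustrative stand-ins for whatever mapping each cited method actually learns:

import numpy as np

def predict_semantic_nn(x_visual, W, class_attributes):
    # Embed the image feature in semantic (attribute) space with the
    # learned visual-to-semantic map W, then return the class whose
    # attribute vector is most similar under cosine similarity.
    s = W @ x_visual
    s = s / np.linalg.norm(s)
    best_name, best_sim = None, -np.inf
    for name, attr in class_attributes.items():
        sim = float(s @ (attr / np.linalg.norm(attr)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

Because a class takes part as soon as it has an attribute vector, the same routine covers seen and unseen classes alike; the cited methods differ mainly in how the projection is trained.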
“…Our second observation is that samples from unseen classes that are visually different from any of the seen classes tend to be projected outside the distribution of seen classes [6]. Atzmon and Chechik [5] propose a general framework that combines domain expert classifiers, such as DAP [7] for unseen classes and LAGO for the seen classes [5]. However, this method relies on the disjoint training of both expert models, and on the assumption that unseen samples are projected outside the distribution of seen classes [6].…”
Section: Introduction
confidence: 99%
“…However, direct search within all classes cannot fully utilize the knowledge learned from the seen training samples, so some Out-of-Domain (OoD) based methods are proposed to first classify the feature as seen or unseen, and then divide the GZSL problem into two sub-tasks: a conventional ZSL task and a fully supervised learning task. For example, some OoD-based methods [7,8] define two classifiers to handle the seen and unseen domains separately. However, they all neglected that OoD detection is itself a binary zero-shot classification, so it is unsuitable to use two totally different models for OoD detection and zero-shot classification respectively.…”
Section: Introduction
confidence: 99%
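The two-sub-task decomposition this excerpt describes can be sketched as a hard gate; the function names, score convention, and threshold below are illustrative, not taken from [7,8]:

def predict_gzsl_two_stage(x, ood_score, threshold, seen_clf, zsl_clf):
    # Stage 1: out-of-domain detection decides whether x looks like a
    # seen class. Stage 2: route to the matching expert -- a fully
    # supervised classifier for seen classes, or a conventional
    # zero-shot model for unseen ones.
    if ood_score(x) > threshold:
        return zsl_clf(x)   # treated as unseen: zero-shot sub-task
    return seen_clf(x)      # treated as seen: supervised sub-task

The excerpt's criticism targets stage 1: the OoD detector is itself a binary decision about classes it has never observed, so handling it with a model entirely separate from the zero-shot classifier can be inconsistent.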