2020
DOI: 10.1007/978-3-030-58577-8_5
Leveraging Seen and Unseen Semantic Relationships for Generative Zero-Shot Learning

Cited by 107 publications (46 citation statements) · References 35 publications
“…The proposed two HarS based inductive methods, HarS-WGAN and HarS-VAEGAN, are compared with 18 state-of-the-art inductive methods: DEVISE [4], LATEM [17], SAE [49], DEM [22], LiGAN [28], ABP [30], TCN [24], OCD-GZSL [50], APNet [51], DE-VAE [52], LsrGAN [53], f-CSLWGAN [11], DASCN [27], DVBE [9], DAZLE [54], TF-VAEGAN [13], CF-GZSL [19], GCM-CF [37]. The proposed two HarST based transductive methods, HarST-DEM and HarST-WGAN, are compared with 14 state-of-the-art transductive methods: ALE-tran [16], GFZSL [55], DSRL [56], QFSL [40], GMN [57], f-VAEGAN-D2 [44], GXE [42], SABR-T [29], PREN [41], VSC [58], DTN [59], ADA [60], SDGN [45], TF-VAEGAN [13].…”
Section: Datasets and Comparative Methods
mentioning (confidence: 99%)
“…There are some knowledge graph embedding algorithms based on GANs and Zero-Shot Learning [157]. Vyas et al. [158] proposed a Generalized Zero-Shot Learning model, which can find unseen semantics in knowledge graphs. 5) Graph Spatial-Temporal Networks: Graph spatial-temporal networks simultaneously capture the spatial and temporal dependence of graphs.…”
Section: Graph Generative Network
mentioning (confidence: 99%)
“…In addition, they deploy the average representation of all samples from an unseen class, defining the soul sample of the class, to reduce the noise in the predictions. Vyas et al. [141] propose LsrGAN, a generative model that leverages the semantic relationship between seen and unseen categories and explicitly performs knowledge transfer by incorporating a novel semantic regularized loss (SR-Loss). Knowing the inter-class relationships in the semantic space helps to impose the same relationship constraints among the generated visual features.…”
Section: Semantic-Visual Features Extractors
mentioning (confidence: 99%)
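The SR-Loss idea summarized in the last statement, carrying inter-class similarities from the semantic space over to generated visual features, can be sketched roughly as below. This is a minimal illustration assuming PyTorch; the function name sr_loss, the epsilon slack, and the use of cosine similarity between per-class means of generated features are assumptions made for the sketch, not the authors' exact formulation from the paper.

    # Minimal sketch (assumed formulation) of a semantic regularized loss:
    # the inter-class similarity of generated visual features is pushed to
    # stay within an epsilon band of the inter-class semantic similarity.
    import torch
    import torch.nn.functional as F

    def sr_loss(gen_features, class_ids, class_semantics, epsilon=0.1):
        # gen_features:    (N, d_v) generated visual features
        # class_ids:       (N,) integer class label per generated feature
        # class_semantics: (C, d_s) semantic/attribute vector per class
        # epsilon:         allowed slack between semantic and visual similarity

        # Per-class mean of generated features serves as a class prototype.
        classes = torch.unique(class_ids)
        protos = torch.stack([gen_features[class_ids == c].mean(dim=0) for c in classes])
        sems = class_semantics[classes]

        # Pairwise cosine similarities in visual and semantic spaces.
        protos_n = F.normalize(protos, dim=-1)
        sems_n = F.normalize(sems, dim=-1)
        vis_sim = protos_n @ protos_n.t()
        sem_sim = sems_n @ sems_n.t()

        # Hinge penalty on deviations larger than epsilon, off-diagonal only.
        off_diag = ~torch.eye(len(classes), dtype=torch.bool, device=gen_features.device)
        violation = (vis_sim - sem_sim).abs().clamp(min=epsilon) - epsilon
        return violation[off_diag].mean()

In a generative ZSL setup of this kind, such a term would be added to the generator's adversarial objective so that features synthesized for unseen classes inherit the similarity structure observed among the semantic vectors.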