2020
DOI: 10.1609/aaai.v34i04.5923

Attribute Propagation Network for Graph Zero-Shot Learning

Abstract: The goal of zero-shot learning (ZSL) is to train a model to classify samples of classes that were not seen during training. To address this challenging task, most ZSL methods relate unseen test classes to seen (training) classes via a pre-defined set of attributes that can describe all classes in the same semantic space, so the knowledge learned on the training classes can be adapted to unseen classes. In this paper, we aim to optimize the attribute space for ZSL by training a propagation mechanism to refine th…
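The attribute-propagation idea in the abstract can be illustrated with a minimal sketch (not the paper's actual architecture): class attribute vectors are smoothed over a class-relation graph, and a test sample's predicted attribute embedding is assigned to the class with the nearest refined attribute vector. All class names, attributes, and the adjacency matrix below are hypothetical toy values.

```python
import numpy as np

# Toy class-attribute matrix: 4 classes x 3 attributes
# (e.g., "has_stripes", "has_wings", "is_aquatic"); values are hypothetical.
A = np.array([
    [1.0, 0.0, 0.0],  # seen class 0
    [0.0, 1.0, 0.0],  # seen class 1
    [0.0, 0.0, 1.0],  # seen class 2
    [1.0, 1.0, 0.0],  # unseen class, related to classes 0 and 1
])

# Class graph: edges (including self-loops) connect semantically related classes.
adj = np.array([
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
], dtype=float)

# One propagation step: each class's attribute vector becomes the
# degree-normalized average over its graph neighbourhood.
P = adj / adj.sum(axis=1, keepdims=True)  # row-normalized adjacency
A_refined = P @ A

def classify(x, attrs):
    """Assign a sample's attribute embedding x to the nearest class."""
    dists = np.linalg.norm(attrs - x, axis=1)
    return int(np.argmin(dists))

# A test sample whose predicted attribute embedding lies near the
# refined vector of the unseen class.
x = np.array([0.6, 0.6, 0.05])
print(classify(x, A_refined))  # -> 3 (the unseen class)
```

In a learned version of this scheme, the propagation weights would be trained rather than fixed to a uniform neighbourhood average; the sketch only shows why refining attributes over the class graph can place unseen classes at more discriminative positions in the semantic space.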


Cited by 69 publications (44 citation statements)
References 45 publications
“…The proposed two HarS based inductive methods, HarS-WGAN and HarS-VAEGAN, are compared with 18 state-of-the-art inductive methods: DEVISE [4], LATEM [17], SAE [49], DEM [22], LiGAN [28], ABP [30], TCN [24], OCD-GZSL [50], APNet [51], DE-VAE [52], LsrGAN [53], f-CSLWGAN [11], DASCN [27], DVBE [9], DAZLE [54], TF-VAEGAN [13], CF-GZSL [19], GCM-CF [37]. The proposed two HarST based transductive methods, HarST-DEM and HarST-WGAN, are compared with 14 state-of-the-art transductive methods: ALE-tran [16], GFZSL [55], DSRL [56], QFSL [40], GMN [57], f-VAEGAN-D2 [44], GXE [42], SABR-T [29], PREN [41], VSC [58], DTN [59], ADA [60], SDGN [45], TF-VAEGAN [13].…”
Section: Datasets and Comparative Methods
confidence: 99%
“…Also, other applications such as pose estimation [35] and segmentation [36], and other image sources such as videos or sketches [37], are excluded. Topics such as few-shot or zero-shot learning methods [38] are likewise excluded from this survey. However, we will see in subsection IV-D that topics like few-shot learning and semi-supervised learning can learn from each other in the future, as in [39].…”
Section: A Related Work
confidence: 99%
“…We conclude that on datasets with a small, fixed number of classes, semi-supervised methods will be more important than unsupervised methods. However, if there are many classes, or new classes should be detected as in few- or zero-shot learning [38], [94], [99], [100], unsupervised methods will still have a lower labeling cost and be of high importance. This means future research has to investigate how semi-supervised ideas can be transferred to unsupervised methods, as in [14], [41], and to settings with many classes or an unknown or growing number of classes, as in [39], [96].…”
Section: Trend: How Much Supervision Is Needed?
confidence: 99%
“…In this way, the student model receives feedback from its teacher and retrains on the high-quality data it generated itself. Dong et al. [46] proposed the Isometric Propagation Network (IPN), which learns to generate vision features with semantic information for unlabeled data/unseen classes.…”
Section: Quality Of Pseudo Labels
confidence: 99%