2023
DOI: 10.1109/tmm.2022.3222657
Towards Unbiased Multi-Label Zero-Shot Learning With Pyramid and Semantic Attention

Abstract: This paper investigates a challenging problem of zero-shot learning in the multi-label scenario (MLZSL), wherein the model is trained to recognize multiple unseen classes within a sample (e.g., an image) based on seen classes and auxiliary knowledge, e.g., semantic information. Existing methods usually resort to analyzing the relationships among the various seen classes residing in a sample along spatial or semantic dimensions, and transfer the learned model to unseen ones. But they ignore the ef…

Cited by 5 publications (2 citation statements)
References 67 publications
“…Compositional Zero-Shot Learning. CZSL [9,20,25,26,28] is a task similar to how humans can imagine and discriminate unseen concepts according to the concepts they have learned, which is a significant branch of ZSL [5,10,11,15,17,21,22,40].…”
Section: Related Work
confidence: 99%
“…Recent years have seen a rise of interest in zero-shot learning (ZSL) which imitates the human ability to recognize unseen classes without observing real samples (Kodirov, Xiang, and Gong 2017;Yu et al 2018;Zhu et al 2019;Chen et al 2021c;Liu et al 2022;Kim, Shim, and Shim 2022;Su et al 2022;Khan et al 2023). Specifically, ZSL takes utilization of seen classes with labeled samples and auxiliary knowledge between seen and unseen classes to achieve recognition.…”
Section: Introduction
confidence: 99%
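The ZSL recipe described in this citing passage — using labeled seen classes plus auxiliary semantic knowledge to recognize unseen classes — can be sketched minimally as nearest-neighbor matching in a semantic attribute space. This is a toy illustration under assumed class names and made-up attribute vectors, not code from the cited paper:

```python
import math

# Toy semantic embeddings (attribute vectors) shared by seen and unseen
# classes. Seen classes have labeled samples at training time; the unseen
# class is recognized purely through its auxiliary attribute description.
# Attribute order (hypothetical): [has_tail, has_mane, has_stripes]
class_embeddings = {
    "horse": [1.0, 0.9, 0.1],   # seen
    "tiger": [1.0, 0.0, 1.0],   # seen
    "zebra": [1.0, 0.9, 1.0],   # unseen
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zsl_predict(visual_feature, embeddings):
    """Assign the class whose semantic embedding is most similar to the
    (projected) visual feature -- the core ZSL inference step."""
    return max(embeddings, key=lambda c: cosine(visual_feature, embeddings[c]))

# A hypothetical image feature already projected into attribute space:
# striped, maned, and tailed, so it should land on the unseen class.
feature = [0.95, 0.8, 0.9]
print(zsl_predict(feature, class_embeddings))  # -> zebra
```

Real systems learn the projection from visual features into the semantic space from seen-class data; the shared attribute space is what lets the classifier transfer to classes it has never observed.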