2020
DOI: 10.48550/arxiv.2010.07459
Preprint

Multi-label Few/Zero-shot Learning with Knowledge Aggregated from Multiple Label Graphs

Cited by 2 publications (3 citation statements)
References 35 publications
“…Chalkidis et al. (2019, 2020); Xie et al. (2019) adopted Label-Wise Attention Networks to encourage interactions between text and labels. Rios and Kavuluru (2018); Lu et al. (2020) used Graph Neural Networks to capture the structural information in the label hierarchy. However, few existing works investigate the effectiveness of pretrained models on the ZS-MTC task, despite pretrained models being effective as matching models for many natural language processing tasks (Ma et al., 2019; Qiao et al., 2019; Nogueira et al., 2019).…”
Section: Related Work (mentioning)
confidence: 99%
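The citation statement above mentions Label-Wise Attention Networks, where each label attends separately over the token representations of a document. A minimal sketch of that idea in PyTorch follows; the class name, dimensions, and per-label classifier head are illustrative assumptions, not the cited papers' exact architectures.

```python
import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    """Each label learns its own attention over token states, so the
    document is summarized differently for every label (hypothetical sketch)."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # One learned query vector per label.
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_dim))
        # One logit per label-specific document representation.
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden_dim) from any text encoder.
        scores = torch.einsum("ld,bsd->bls", self.label_queries, token_states)
        weights = torch.softmax(scores, dim=-1)          # (batch, labels, seq_len)
        # Label-specific document representations.
        label_docs = torch.einsum("bls,bsd->bld", weights, token_states)
        return self.classifier(label_docs).squeeze(-1)   # (batch, labels)

# Usage: logits = LabelWiseAttention(768, num_labels=500)(encoder_output)
```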
“…Existing work on zero-shot learning for multi-label text classification (ZS-MTC) mostly learns a matching model between the feature space of text and the label space (Ye et al., 2020). In order to learn effective representations for labels, a majority of existing work incorporates label hierarchies via a label encoder designed as Graph Neural Networks (GNNs) that can aggregate the neighboring information for labels (Chalkidis et al., 2020; Lu et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%
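This statement describes the two pieces of the ZS-MTC setup: a GNN label encoder that aggregates each label's neighbors in the hierarchy, and a matching score between text and label representations that extends to unseen labels. A minimal sketch under those assumptions is below; the single GCN-style layer, the row-normalized adjacency matrix, and all names are illustrative, not the cited models' actual APIs.

```python
import torch
import torch.nn as nn

class GraphLabelEncoder(nn.Module):
    """One GCN-style aggregation step over the label hierarchy (hypothetical)."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, label_embeds: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # label_embeds: (num_labels, embed_dim), e.g. averaged word vectors of label names.
        # adj: (num_labels, num_labels) adjacency with self-loops, row-normalized.
        aggregated = adj @ label_embeds            # mix each label with its neighbors
        return torch.relu(self.proj(aggregated))   # (num_labels, embed_dim)

def match_scores(text_vec: torch.Tensor, label_reprs: torch.Tensor) -> torch.Tensor:
    # Dot-product matching between the text representation and every label;
    # because labels are scored via their embeddings, unseen (zero-shot)
    # labels can be scored the same way as seen ones.
    return text_vec @ label_reprs.T                # (batch, num_labels)
```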
“…This is because auto-tagging has a potentially very large label space, ranging from subject topics to knowledge components (KC) (Zhang et al., 2015; Koedinger et al., 2012; Mohania et al., 2021; Viswanathan et al., 2022). The resulting data scarcity decreases performance on rare labels during training (Chalkidis et al., 2020; Lu et al., 2020; Snell et al., 2017; Choi et al., 2022).…”
Section: Introduction (mentioning)
confidence: 99%