2021
DOI: 10.48550/arxiv.2104.01666
Preprint

Improving Pretrained Models for Zero-shot Multi-label Text Classification through Reinforced Label Hierarchy Reasoning

Abstract: Exploiting label hierarchies has become a promising approach to tackling the zero-shot multi-label text classification (ZS-MTC) problem. Conventional methods aim to learn a matching model between text and labels, using a graph encoder to incorporate label hierarchies to obtain effective label representations (Rios and Kavuluru, 2018). More recently, pretrained models like BERT (Devlin et al., 2018) have been used to convert classification tasks into a textual entailment task (Yin et al., 2019). This approach…
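The entailment-style formulation mentioned in the abstract (Yin et al., 2019) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example built on the Hugging Face transformers zero-shot-classification pipeline; the model choice (facebook/bart-large-mnli) and label set are assumptions for illustration, not the configuration used in the paper, and it shows only the flat entailment baseline that the paper's reinforced label-hierarchy reasoning builds on.

```python
# Minimal sketch of entailment-based zero-shot multi-label classification
# (Yin et al., 2019 style). Model and labels are illustrative assumptions.
from transformers import pipeline

# Any NLI-finetuned model can serve as the entailment scorer.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

text = "The Olympic Games boosted hotel bookings and airline revenue."
candidate_labels = ["sports", "economy", "travel", "politics"]

# multi_label=True scores each label independently against the text
# (entailment vs. contradiction per label) instead of normalizing
# the scores across all labels, which suits multi-label tasks.
result = classifier(text, candidate_labels=candidate_labels, multi_label=True)

for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```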

Cited by 3 publications (3 citation statements)
References 28 publications
“…Additionally, at the beginning of 2021 a number of hierarchical label-based attention models were published, such as HLAN (Dong et al., 2021), LA-HCN (Zhang et al., 2020), and the model of Liu et al. (2021b). Other papers worth mentioning are Meng et al. (2018), Xiao et al. (2019), and Yin et al. (2019).…”
Section: Literature On Topical Text Classification (mentioning)
confidence: 99%
“…Typically, the MRC pipeline works in two phases, where a passage retriever is followed by a passage reader (Chen et al., 2017). For a given question, the retriever first extracts a set of relevant passages from a knowledge base (i.e., a text corpus), and then the reader selects an answer (e.g., a text span) from one of the retrieved passages (Zhu et al., 2021).…”
Section: Introduction (mentioning)
confidence: 99%
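The two-phase retriever-reader pipeline described in the statement above can be sketched as follows. The term-overlap retriever and the heuristic reader below are simplified, hypothetical stand-ins, not the systems used in the cited works (Chen et al., 2017; Zhu et al., 2021), where a real retriever would use sparse or dense scoring and the reader would be a trained span-extraction model.

```python
# Sketch of a two-phase MRC pipeline: passage retriever -> passage reader.
# Both components are simplified, illustrative stand-ins.
from collections import Counter


def retrieve(question: str, corpus: list, k: int = 3) -> list:
    """Phase 1: rank passages by naive term overlap with the question
    (stand-in for a real sparse/dense retriever) and return the top-k."""
    q_terms = Counter(question.lower().split())
    def score(passage: str) -> int:
        return sum(q_terms[t] for t in passage.lower().split())
    return sorted(corpus, key=score, reverse=True)[:k]


def read(question: str, passage: str) -> str:
    """Phase 2: pick an answer span from the passage. Here a trivial
    heuristic returns the sentence with the most word overlap."""
    q_words = set(question.lower().split())
    sentences = passage.split(". ")
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))


def answer(question: str, corpus: list) -> str:
    candidates = retrieve(question, corpus)      # phase 1: retriever
    return read(question, candidates[0])         # phase 2: reader on top passage


if __name__ == "__main__":
    corpus = [
        "BERT was introduced by Devlin et al. It is a pretrained transformer.",
        "The Olympic Games affect tourism. Hotel bookings rise during the event.",
    ]
    print(answer("Who introduced BERT?", corpus))
```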
“…Most works in AL have focused on developing strategies for single-label text classification (Tong and Koller, 2001; Hoi et al., 2006), Named Entity Recognition (Tomanek and Hahn, 2009; Shen et al., 2004, 2017) and Neural Machine Translation (Zhang et al., 2018; Peris and Casacuberta, 2018). More recently, multi-label text classification (Liu et al., 2017; Pant et al., 2019; Liu et al., 2021) has received considerable attention, since many text classification tasks are multi-labeled, i.e., each document can belong to more than one category. Take news classification as an example: a news article talking about the effect of the Olympic Games on the tourism industry might belong to the following topic categories: sports, economy and travel.…”
Section: Introduction (mentioning)
confidence: 99%