Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022), 2022
DOI: 10.18653/v1/2022.dlg4nlp-1.7
LiGCN: Label-interpretable Graph Convolutional Networks for Multi-label Text Classification

Abstract: Multi-label text classification (MLTC) is an attractive and challenging task in natural language processing (NLP). Compared with single-label text classification, MLTC has a wider range of applications in practice. In this paper, we propose a label-interpretable graph convolutional network model to solve the MLTC problem by modeling tokens and labels as nodes in a heterogeneous graph. In this way, we are able to take into account multiple relationships including token-level relationships. Besides, the model al…
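As a rough, hypothetical illustration of the idea sketched in the abstract (tokens and labels as nodes of one heterogeneous graph, with information propagated by graph convolutions), the snippet below applies a single generic GCN layer to a combined token-plus-label node matrix. This is not the authors' implementation: the class name SimpleGCNLayer, the toy node counts, the identity adjacency matrix, and the final label scoring are all placeholder assumptions.

```python
# Hypothetical sketch (not the LiGCN code): one generic GCN layer over a graph
# whose nodes are tokens and labels, as described in the abstract.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj is assumed to be a normalized adjacency matrix over all nodes
        # (tokens + labels); token-token and token-label edges would be encoded here.
        return torch.relu(self.linear(adj @ h))

# Toy example: 6 token nodes + 3 label nodes in one graph.
num_tokens, num_labels, dim = 6, 3, 16
n = num_tokens + num_labels
h = torch.randn(n, dim)      # initial node features (placeholder values)
adj = torch.eye(n)           # placeholder adjacency: self-loops only
layer = SimpleGCNLayer(dim, dim)
h_new = layer(h, adj)

# Each label keeps its own node embedding (last num_labels rows); scoring them
# against a pooled document representation gives one probability per label.
label_scores = torch.sigmoid(h_new[num_tokens:] @ h_new[:num_tokens].mean(0))
print(label_scores.shape)  # torch.Size([3]) -- one score per label
```

Presumably the per-label node embeddings are what make predictions label-interpretable, since each label retains its own representation; the sketch only mimics that by reading off one row per label.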

Cited by 4 publications (1 citation statement)
References: 27 publications
“…Transformers (Vaswani et al 2017) designed for sequential data have revolutionized the field of Natural Language Processing (NLP) (Liu et al 2019; Zhu et al 2020; Li et al 2022), and have recently made a tremendous impact in graph learning (Yang et al 2021; Dwivedi and Bresson 2020) and computer vision (Dosovitskiy et al 2020; Huynh 2022). The self-attention used by regular Transformer models comes with a quadratic time and memory complexity O(n^2) for an input sequence of length n, which prevents the application of Transformers to longer sequences in practical settings with limited computational resources.…”
Section: Introduction (mentioning)
Confidence: 99%
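The quoted statement attributes the length limitation to the quadratic cost of standard self-attention. The toy snippet below is my own illustration (assuming the usual scaled dot-product attention of Vaswani et al 2017, not code from either paper): it materializes the n × n score matrix to show where the O(n^2) time and memory term comes from.

```python
# Minimal illustration of why self-attention is O(n^2) in sequence length n:
# the score matrix has one entry per pair of positions.
import torch

def attention_scores(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    # q, k: (n, d) -> scores: (n, n); time and memory grow quadratically with n.
    d = q.shape[-1]
    return (q @ k.transpose(0, 1)) / d ** 0.5

for n in (128, 256, 512):
    q = torch.randn(n, 64)
    k = torch.randn(n, 64)
    s = attention_scores(q, k)
    print(n, s.numel())  # 16384, 65536, 262144 -- entries grow as n^2
```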