2020
DOI: 10.1109/access.2020.3017382

A Hybrid BERT Model That Incorporates Label Semantics via Adjustive Attention for Multi-Label Text Classification

Abstract: The multi-label text classification task aims to tag a document with a series of labels. Previous studies usually treated labels as symbols without semantics and ignored the relation among labels, which caused information loss. In this paper, we show that explicitly modeling label semantics can improve multi-label text classification. We propose a hybrid neural network model to simultaneously take advantage of both label semantics and fine-grained text information. Specifically, we utilize the pre-trained BERT …

Cited by 67 publications (25 citation statements) · References 28 publications
“…(2018) proposed BERT, an innovative deep neural network model that includes the pre‐training of deep bidirectional representations from unlabelled text and fine‐tuning with one additional output layer. BERT has achieved the best performance in text classification (Cai et al., 2020; Dong et al., 2020). Thus, this study adopted BERT to automatically classify online discussion transcripts into three interaction types: cognitive, metacognitive and off‐topic.…”
Section: Methods
Mentioning confidence: 99%
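For illustration, a minimal sketch of the setup this quote describes: fine-tuning pre-trained BERT with one additional output layer to classify transcripts into the three interaction types named in the study. This uses the Hugging Face transformers API; the checkpoint, example texts, and gold labels are assumptions, not the study's actual data or configuration.

```python
# Minimal sketch: pre-trained BERT + one additional output layer for a
# 3-way transcript classifier. Checkpoint, texts, and labels are
# illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

LABELS = ["cognitive", "metacognitive", "off-topic"]  # from the quote

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # adds a single classification head on top of BERT

texts = ["I think the main cause is X.", "Let's check our plan first."]
gold = torch.tensor([0, 1])  # hypothetical gold labels

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=gold)   # returns cross-entropy loss + logits
out.loss.backward()                 # one fine-tuning step (optimizer omitted)
print(out.logits.argmax(dim=-1))    # predicted interaction types
```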
“…A hybrid BERT model was proposed for multi-label text classification [19]. This model works through four sub-tasks; first, a context-aware representation is built with the word-embedding technique of pre-trained BERT.…”
Section: Indonesian J Elec Eng and Comp Sci (ISSN: 2502-4752)
Mentioning confidence: 99%
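A minimal sketch of that first sub-task as the quote describes it — obtaining context-aware word representations from pre-trained BERT. The checkpoint and example sentence are assumptions:

```python
# Minimal sketch: context-aware token representations from pre-trained
# BERT (the first sub-task the citing paper describes).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

doc = "The model tags each document with a series of labels."
batch = tokenizer(doc, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state  # (1, seq_len, 768)
# Each row of `hidden` is a context-aware embedding of one word piece.
print(hidden.shape)
```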
“…A GCN aggregates the values of all neighboring nodes to evaluate the current node. The ReLU activation function is used here [19], [27].…”
Mentioning confidence: 99%
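A minimal sketch of one GCN layer matching that description: each node aggregates its neighbors' values (self-loops included) and the result passes through ReLU. The symmetric normalization and the toy sizes are standard but assumed here, not taken from the cited work.

```python
# One GCN layer: every node aggregates the values of its neighbors
# (plus itself via self-loops), projects them, and applies ReLU.
import torch

def gcn_layer(A: torch.Tensor, H: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    A_hat = A + torch.eye(A.size(0))          # add self-loops
    deg = A_hat.sum(dim=1)                    # node degrees
    D_inv_sqrt = torch.diag(deg.pow(-0.5))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalized adjacency
    return torch.relu(A_norm @ H @ W)         # aggregate, project, ReLU

# Toy graph: 3 nodes, edges 0-1 and 1-2; 4-dim features, 2-dim output.
A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = torch.randn(3, 4)
W = torch.randn(4, 2)
print(gcn_layer(A, H, W))
```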
“…DUTSD has the best performance among the three separate sentiment dictionaries. The BERT representation technique is used for ABSA [13, 14]. Many previous methodologies treated labels as symbols without semantics and ignored the relation among labels, which caused information loss.…”
Section: Related Work
Mentioning confidence: 99%
“…Many previous methodologies treated labels as symbols without semantics and ignored the relation among labels, which caused information loss. This problem is handled by Cai et al. [13]. In this approach, the hybrid BERT model incorporates label semantics via adjustive attention, which searches for and identifies semantic dependencies between the label space and the text space simultaneously.…”
Section: Related Work
Mentioning confidence: 99%
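The quoted statements do not spell out the paper's adjustive attention mechanism, so the following is only a generic label-attention sketch of the underlying idea: label embeddings attend over token representations so that label space and text space interact. Every dimension and variable name here is illustrative, not the authors' method.

```python
# Generic label-attention sketch (NOT the paper's exact adjustive
# attention): each label embedding attends over the token
# representations, yielding a label-specific text summary and score.
import torch
import torch.nn.functional as F

num_labels, seq_len, dim = 5, 12, 64
labels = torch.randn(num_labels, dim)   # label semantic embeddings
tokens = torch.randn(seq_len, dim)      # e.g., BERT token representations

scores = labels @ tokens.T / dim ** 0.5    # (num_labels, seq_len)
attn = F.softmax(scores, dim=-1)           # per-label attention weights
label_ctx = attn @ tokens                  # label-specific text summaries
logits = (label_ctx * labels).sum(dim=-1)  # one score per label
probs = torch.sigmoid(logits)              # multi-label probabilities
print(probs)
```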