Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.394

Joint Modelling of Emotion and Abusive Language Detection

Abstract: The rise of online communication platforms has been accompanied by some undesirable effects, such as the proliferation of aggressive and abusive behaviour online. Aiming to tackle this problem, the natural language processing (NLP) community has experimented with a range of techniques for abuse detection. While achieving substantial success, these methods have so far only focused on modelling the linguistic properties of the comments and the online communities of users, disregarding the emotional state of the …

Cited by 56 publications (40 citation statements). References 29 publications.
“…We first adapt three multi-task learning models based on bidirectional recurrent neural networks recently proposed by Rajamanickam et al (2020) for jointly modeling abusive language detection and emotion detection. We also adapt our M-RoBERTa Emo model in a multi-task setting using two variants.…”
Section: Predictive Models
confidence: 99%
“…We adapt the MTL-Hard Sharing model of Rajamanickam et al (2020), where a single encoder is shared and updated by both tasks. We first pass GloVe embedding representations to a shared stacked BiGRU encoder.…”
Section: MTL-Hard Sharing
confidence: 99%
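The hard-sharing setup quoted above can be illustrated with a minimal pure-Python sketch: a single shared encoder (a stand-in for the stacked BiGRU over GloVe-style embeddings) is consumed by two task-specific heads, one for abuse detection and one for emotion detection. The functions, dimensions, and weights below are toy assumptions for illustration, not the authors' implementation.

```python
# Hard parameter sharing: one shared encoder feeds two task heads,
# so gradients from both tasks update the same encoder parameters.

def shared_encoder(token_vectors):
    # Stand-in for the shared stacked BiGRU: mean-pool the
    # pretrained (GloVe-style) token embeddings.
    dim = len(token_vectors[0])
    return [sum(v[i] for v in token_vectors) / len(token_vectors)
            for i in range(dim)]

def head(hidden, weights):
    # A linear task-specific head producing one score per class.
    return [sum(h * w for h, w in zip(hidden, row)) for row in weights]

# Toy 3-dimensional "embeddings" for a two-token comment.
tokens = [[1.0, 0.0, 2.0], [3.0, 2.0, 0.0]]
h = shared_encoder(tokens)                         # shared representation
abuse_scores = head(h, [[1, 0, 0], [0, 1, 0]])     # abuse-detection head
emotion_scores = head(h, [[0, 0, 1], [1, 1, 1]])   # emotion-detection head
```

Because both heads read the same `h`, training on the emotion task shapes the representation that the abuse task also uses, which is the intuition behind the reported gains.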
“…Liu et al. (2017) used shared and private latent features leveraging multitask learning for different text classification tasks. Rajamanickam et al. (2020); Duong et al. (2016); Liu et al. (2016) proposed a joint framework for modeling abuse and emotion detection and showed improvements over STL and transfer learning. Akhtar et al. (2018) proposed a multitask ensemble architecture for jointly modeling emotion, sentiment, and intensity, which gave improvements over single-label classification.…”
Section: Related Work
confidence: 99%
“…Such an approach to aggregate information flow from two encoders has facilitated success in prior multitask learning settings as well (Rajamanickam et al., 2020; Dankers et al., 2019). As for our auxiliary task, we pass the embeddings e^(a) through only the shared encoder (h^(a) = h^(s)), followed by a dropout layer.…”
Section: Multitask Learning
confidence: 99%
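The two-encoder aggregation quoted above can be sketched as follows: the main task combines the shared and task-specific encoder outputs (here by concatenation), while the auxiliary task uses the shared encoder alone, mirroring h^(a) = h^(s). The encoders are stand-ins (simple elementwise scalings) and every name is an illustrative assumption, not the cited paper's code.

```python
# Main task: aggregate shared + private encoders; auxiliary task:
# shared encoder only (dropout omitted for brevity).

def encode(vec, scale):
    # Stand-in for an encoder: elementwise scaling of the embedding.
    return [x * scale for x in vec]

def main_task_repr(e):
    h_shared = encode(e, 2.0)      # shared encoder h^(s)
    h_private = encode(e, -1.0)    # task-specific encoder
    return h_shared + h_private    # aggregate by concatenation

def aux_task_repr(e):
    return encode(e, 2.0)          # h^(a) = h^(s): shared encoder only

e = [1.0, 0.5]
print(main_task_repr(e))  # [2.0, 1.0, -1.0, -0.5]
print(aux_task_repr(e))   # [2.0, 1.0]
```

The design choice is that only the shared encoder receives gradients from both tasks, while the private encoder specializes to the main task.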
“…Several research papers have proven that emotion lexicons are helpful features for the tasks of abusive language and hate-speech detection (Koufakou and Scott, 2020; Wiegand et al., 2018; Martins et al., 2018; Corazza et al., 2018; Alorainy et al., 2018; Gao and Huang, 2017). There is also one study showing that jointly modelling emotion classification and abuse detection through a multitask approach can improve the performance of the latter task (Rajamanickam et al., 2020).…”
Section: Related Work
confidence: 99%
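The lexicon-based line of work quoted above can be illustrated with a minimal sketch: count lexicon hits per emotion category and use the counts as extra classifier features. The tiny lexicon and example comment are made-up illustrations, not any published resource.

```python
# Emotion-lexicon features: per-emotion hit counts that can be
# appended to an abuse classifier's feature vector.

EMOTION_LEXICON = {
    "hate": "anger",
    "stupid": "anger",
    "afraid": "fear",
    "happy": "joy",
}

def emotion_features(comment, emotions=("anger", "fear", "joy")):
    # Count lexicon hits per emotion category for one comment.
    counts = {e: 0 for e in emotions}
    for token in comment.lower().split():
        emotion = EMOTION_LEXICON.get(token)
        if emotion:
            counts[emotion] += 1
    return [counts[e] for e in emotions]

print(emotion_features("I hate this stupid thing"))  # [2, 0, 0]
```

In practice a full lexicon such as EmoLex would replace the toy dictionary, but the feature construction is the same.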