Proceedings of the Third Workshop on Abusive Language Online 2019
DOI: 10.18653/v1/w19-3515

Neural Word Decomposition Models for Abusive Language Detection

Abstract: User-generated text on social media often exhibits undesirable characteristics, including hate speech, abusive language, and insults, that are targeted to attack or abuse a specific group of people. Such text is often written differently from traditional text such as news, involving either explicit mention of abusive words, obfuscated words, and typographical errors, or implicit abuse, i.e., indicating or targeting negative stereotypes. Thus, processing this text poses several robustness challenges w…
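The paper's core idea is that decomposing words into subword units makes classifiers more robust to obfuscated spellings. As a minimal sketch of that effect (the pretrained WordPiece tokenizer here is an assumption for illustration, not the authors' exact pipeline), an out-of-vocabulary obfuscated word still decomposes into smaller known pieces rather than a single unknown token:

# Minimal sketch (illustrative, not the authors' exact setup):
# subword tokenization breaks obfuscated spellings into smaller
# known units, so some lexical signal survives character noise.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

for word in ["idiot", "id1ot", "stuuupid"]:
    # In-vocabulary words stay whole; obfuscated variants fall back
    # to smaller WordPiece units instead of one [UNK] token.
    print(f"{word:10s} -> {tokenizer.tokenize(word)}")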

Cited by 25 publications (22 citation statements, all classified as mentioning; published 2020–2024) | References 29 publications

Citation statements, ordered by relevance:
“…For Wikipedia, Bodapati et al. (2019) report a weighted F1 of 95.7 as the state-of-the-art result. However, when we re-implement their model, we achieve a slightly better weighted F1 of 95.9, as we report in Table 5.…”
Section: Classification Results (mentioning)
confidence: 99%
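For context, the weighted F1 used in this comparison averages the per-class F1 scores weighted by each class's support. A short sketch with scikit-learn (the labels below are made up for illustration, not data from either paper):

from sklearn.metrics import f1_score

# Illustrative labels only: 0 = non-abusive, 1 = abusive
y_true = [0, 0, 0, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 1, 0, 0, 1, 0]

# Per-class F1 (here 0.8 for class 0, 0.667 for class 1) is
# averaged with weights proportional to class support (5 and 3)
print(f1_score(y_true, y_pred, average="weighted"))  # -> 0.75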
“…This model includes an ensemble of several machine learning classifiers with word n-gram and character n-gram lexical features. Bodapati'19 (Bodapati et al., 2019): This work reported the state-of-the-art results on the Wikipedia dataset. The authors added a single dense layer on top of BERT to fine-tune it for the task of abusive language detection.…”
Section: Baselines and SOTA Approaches (mentioning)
confidence: 99%
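A minimal sketch of the architecture that statement describes, a single dense layer over BERT's pooled output, fine-tuned end to end (the model name, dropout rate, and label count are assumptions, not the authors' reported configuration):

# Sketch of a BERT fine-tuning classifier for abusive language
# detection: one task-specific dense layer over the pooled [CLS]
# representation. Hyperparameters here are illustrative.
import torch.nn as nn
from transformers import BertModel

class BertAbuseClassifier(nn.Module):
    def __init__(self, num_labels: int = 2, dropout: float = 0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.dropout = nn.Dropout(dropout)
        # The single dense layer added on top of BERT
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = self.dropout(outputs.pooler_output)  # pooled [CLS] vector
        return self.classifier(pooled)  # logits; trained end to end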
“…Several ideas have been proposed in the literature to improve the performance of BERT for abusive language detection, for example, fine-tuning large pre-trained language models (Bodapati et al., 2019).…”
Section: Related Work (mentioning)
confidence: 99%
“…Long Short-Term Memory (LSTM) models [13] and Gated Recurrent Unit (GRU) models [14] capture long-range dependencies. Such properties are important for modelling toxic speech [7], [15].…”
Section: She Looks Like a Plastic Monkey Doll! (mentioning)
confidence: 99%
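A sketch of the kind of recurrent classifier these citations describe (the vocabulary size, dimensions, bidirectional choice, and label count are illustrative assumptions):

# Sketch of an LSTM toxic-speech classifier of the kind cited above.
# The recurrent states carry information across the whole sequence,
# which is what lets the model capture long-range dependencies.
import torch
import torch.nn as nn

class LstmToxicClassifier(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256, num_labels=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bidirectional LSTM reads the sequence in both directions
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):                    # (batch, seq_len)
        embedded = self.embedding(token_ids)         # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)            # h_n: (2, batch, hidden_dim)
        final = torch.cat([h_n[0], h_n[1]], dim=-1)  # forward + backward states
        return self.classifier(final)                # logits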