2020
DOI: 10.1007/978-981-15-2740-1_17

Detection of Hate Speech and Offensive Language in Twitter Data Using LSTM Model

Cited by 50 publications (27 citation statements)
References 20 publications
“…Agarwal et al. [13] proposed four variants of RNN (GRNN, LRNN, GLRNN and UGRNN) to perform multimodal sentiment analysis covering text, video, and audio. Bisht et al. [14] attained an accuracy of 86% by performing hate speech classification with a three-layer LSTM/Bi-LSTM model.…”
Section: Related Work (mentioning)
confidence: 99%
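
As a rough sketch of what such a stacked "three-layer LSTM/Bi-LSTM" classifier looks like, the snippet below wires three bidirectional LSTM layers over an embedding in Keras. The framework choice, vocabulary size, sequence length, layer widths, and three-way output head are illustrative assumptions, not the configuration reported in [14].

```python
# Sketch of a three-layer Bi-LSTM text classifier (Keras).
# All sizes are illustrative assumptions, not values from [14].
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000  # assumed vocabulary size
MAX_LEN = 50         # assumed padded tweet length

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    # The first two recurrent layers return full sequences so the
    # next LSTM layer can read them token by token.
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64)),
    # Assumed three-way head: hate speech / offensive / neither.
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```
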
“…Recently, text classification models have been shifting towards deep learning models such as CNN [4,11,12], RNN [13], GRU [10], LSTM and BiLSTM [14]. The Convolutional Neural Network (CNN) is generally used in digital image processing, but this does not rule out its use in text processing [11].…”
Section: Introduction (mentioning)
confidence: 99%
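
To make concrete how a CNN carries over from images to text, the hypothetical sketch below slides 1D convolutions over word embeddings, so each filter acts like a learned n-gram detector; all sizes are assumed for illustration and are not taken from [11].

```python
# Sketch of a 1D-CNN text classifier: convolutions slide over word
# embeddings rather than pixels. Sizes are illustrative assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(50,)),       # assumed padded sequence length
    layers.Embedding(20_000, 128),   # assumed vocabulary / embedding size
    layers.Conv1D(128, kernel_size=5, activation="relu"),  # ~5-gram filters
    layers.GlobalMaxPooling1D(),     # keep each filter's strongest response
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```
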
“…In another study, Bisht et al. (2020) developed an LSTM model for the detection of hate speech and offensive language in Twitter data. The developed LSTM model includes a single LSTM layer with a set of input layers that receives the sequential input.…”
Section: Related Work, 2.1 Literature Review (mentioning)
confidence: 99%
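
A minimal sketch of such a single-LSTM-layer pipeline, assuming Keras, is shown below: input layers turn raw tweets into padded token sequences that the lone LSTM layer consumes. The placeholder data, tokenizer settings, layer sizes, and binary output head are assumptions, not the exact setup of Bisht et al. (2020).

```python
# Sketch of a single-LSTM-layer tweet classifier (Keras).
# Data and hyperparameters are placeholders, not the cited setup.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder corpus standing in for labeled tweets.
tweets = np.array([["you are wonderful"], ["some offensive tweet"]])
labels = np.array([0, 1])

# Input layer: map raw text to padded sequences of token ids.
vectorize = layers.TextVectorization(max_tokens=10_000,
                                     output_sequence_length=40)
vectorize.adapt(tweets)

model = models.Sequential([
    layers.Input(shape=(1,), dtype=tf.string),
    vectorize,
    layers.Embedding(10_000, 64),
    layers.LSTM(64),                        # the single LSTM layer
    layers.Dense(1, activation="sigmoid"),  # assumed binary head
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(tweets, labels, epochs=1)  # toy fit on placeholder data
```
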
“…Besides the social media platforms themselves, different studies have also made important attempts to identify abusive behaviors. For example, several studies have emerged to detect different forms of abusive behavior (Chen et al., 2012; Davidson et al., 2017; Nobata et al., 2016), such as hate speech (Siddiqua et al., 2019; Badjatiya et al., 2017; Djuric et al., 2015; Warner and Hirschberg, 2012; Waseem and Hovy, 2016), offensive language (Bisht et al., 2020; Mehdad and Tetreault, 2016; Xiang et al., 2012), sexist and racist language (Jha and Mamidi, 2017; Lozano et al., 2017), homophobia (Sanguinetti et al., 2018), cyberbullying (Balakrishnan et al., 2020; Chatzakou et al., 2017; Dinakar et al., 2011; Riadi, 2017), and harassment and aggression (Bugueño and Mendoza, 2019; Espinoza and Weiss, 2019; Kim et al., 2020). In the different studies, each of these forms of abusive behavior has its own motive, depending on the topics and individuals involved.…”
Section: Introduction (mentioning)
confidence: 99%
“…The detection of offensive language, cyberbullying, and hate speech are closely connected tasks that are often confused (Malmasi and Zampieri, 2018). Several machine learning models addressing hate speech or offensive language detection have been proposed in recent years; in particular, deep learning models (Gambäck and Sikdar, 2017; Park and Fung, 2017; Badjatiya et al., 2017; Agrawal and Awekar, 2018; Bisht et al., 2020; Gertner et al., 2019; Pérez and Luque, 2019) have grown in popularity among researchers working on this task. Despite the growing interest in the area, the models are usually trained and evaluated on very specific English datasets, and their generalizability to other contexts or languages remains a challenge.…”
Section: Introduction (mentioning)
confidence: 99%