Proceedings of the 10th ACM Conference on Web Science 2019
DOI: 10.1145/3292522.3326028
A Unified Deep Learning Architecture for Abuse Detection

Abstract: Hate speech, offensive language, sexism, racism and other types of abusive behavior have become a common phenomenon in many online social media platforms. In recent years, such diverse abusive behaviors have been manifesting with increased frequency and levels of intensity. This is due to the openness and willingness of popular media platforms, such as Twitter and Facebook, to host content of sensitive or controversial topics. However, these platforms have not adequately addressed the problem of online abusive…

Cited by 162 publications (101 citation statements)
References 33 publications
“…Waseem et al [23] brought a new insight to hate speech and abusive language detection tasks by proposing a multi-task learning framework to deal with datasets across different annotation schemes, labels, or geographic and cultural influences from data sampling. Founta et al [7] built a unified classification model that can efficiently handle different types of abusive language such as cyberbullying, hate, sarcasm, etc. using raw text and domain-specific metadata from Twitter.…”
Section: Previous Work
confidence: 99%
“…Our model is also similar to the one by Founta et al (2018) with two differences. First, they assume that classes are provided with metadata, while our model extracts class descriptions directly from the training data.…”
Section: Related Work
confidence: 59%
“…The consequence of this decision is that the probability for a class is not independent of the other class probabilities, which would not be the desired behavior when dealing with a multi-label classification task. For instance, in a multi-label classification for hate speech detection, the probability of a comment being offensive is independent of its probability of being hateful, because an offensive tone can be used in a text that is not necessarily hateful (Founta et al, 2018). For this reason, instead of Softmax, we use the Sigmoid activation function…”
Section: Model
confidence: 99%
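The distinction drawn in the citation above can be illustrated numerically. The sketch below (a minimal standard-library example, not taken from any of the cited papers) contrasts softmax, whose outputs compete and sum to 1, with per-class sigmoid scores, which are independent and therefore let a comment score high on both "offensive" and "hateful" at once. The label names and logit values are hypothetical.

```python
import math

def softmax(logits):
    # Softmax couples the classes: outputs compete and always sum to 1,
    # so raising one class's probability lowers the others.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid_each(logits):
    # Sigmoid scores each class independently: every value lies in (0, 1)
    # and the values need not sum to 1 -- the behavior needed for
    # multi-label classification.
    return [1.0 / (1.0 + math.exp(-x)) for x in logits]

# Hypothetical logits for three labels: [offensive, hateful, neutral].
logits = [2.0, -1.0, 0.5]

p_soft = softmax(logits)
p_sig = sigmoid_each(logits)

print(round(sum(p_soft), 6))   # softmax outputs sum to 1.0
print([round(p, 3) for p in p_sig])  # independent scores; sum is unconstrained
```

Under this view, a comment can receive a high "offensive" score and a low "hateful" score simultaneously, which a softmax output layer cannot express without the classes trading probability mass.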
“…We checked the graphemic characteristics of the written text and we gave this as a feature to the model. We mainly used the non user related features defined in Founta et al (2018).…”
Section: Features
confidence: 99%