2018
DOI: 10.1186/s40163-018-0089-1
Hate is in the air! But where? Introducing an algorithm to detect hate speech in digital microenvironments

Abstract: With the objective of facilitating and reducing the analysis tasks undertaken by law enforcement agencies and service providers, and using a sample of digital messages (i.e., tweets) sent via Twitter following the June 2017 London Bridge terror attack (N = 200,880), the present study introduces a new algorithm designed to detect hate speech messages in cyberspace. Unlike traditional designs based on semantic and syntactic approaches, the algorithm hereby implemented feeds solely on metadata, achieving a high level of…

Cited by 32 publications (12 citation statements). References 36 publications.
“…Paper [24] approached a method for hate speech detection in digital microenvironments. The combination of the people (i.e., accounts), who say things (i.e., tweets) to other people (i.e., other accounts), is the definition of digital microenvironments in cyberspace.…”
Section: Results
confidence: 99%
See 1 more Smart Citation
“…Paper [24] approached a method for hate speech detection in digital microenvironments. The combination of the people (i.e., accounts), who say things (i.e., tweets) to other people (i.e., other accounts), is the definition of digital microenvironments in cyberspace.…”
Section: Resultsmentioning
confidence: 99%
“…But where? Introducing an algorithm to detect hate speech in digital microenvironments [24] Springer Link 2020 Developing an online hate classifier for multiple social media platforms…”
confidence: 99%
“…The interplay between police practice and research seems central to creating the evidence-based strategies needed to move away from the unsafety and violence surrounding the ODS (Esteve, Miró-Llinares & Rabasa, 2018; Miró-Llinares, Moneva, & Esteve, 2018). In general terms, big data allows us to collect data as they are generated and compare them over time (Bello-Orgaz, Jung, & Camacho, 2016; McAfee, Brynjolfsson, Davenport, Patil & Barton, 2012).…”
Section: Research and Policy Recommendations
confidence: 99%
“…Most current approaches include user data and other metadata in the analysis in order to improve the models' accuracy (Mathew et al., 2019; Ribeiro et al., 2018; Waseem and Hovy, 2016; Miró-Llinares et al., 2018; Stoop et al., 2019). However, the focus of these models lies on identifying hateful user accounts and environments, rather than hateful content per se.…”
Section: Introduction and Related Work
confidence: 99%