2019
DOI: 10.1002/poi3.223
Do Machines Replicate Humans? Toward a Unified Understanding of Radicalizing Content on the Open Social Web

Abstract: The advent of the Internet inadvertently augmented the functioning and success of violent extremist organizations. Terrorist organizations like the Islamic State in Iraq and Syria (ISIS) use the Internet to project their message to a global audience. The majority of research and practice on web‐based terrorist propaganda uses human coders to classify content, raising serious concerns such as burnout, mental stress, and reliability of the coded data. More recently, technology platforms and researchers have star…


Cited by 13 publications (12 citation statements). References 71 publications.
“…(2020); Hall et al. (2020); Owoeye and Weir (2018); Macnair and Frank (2018); Figea et al. (2016); Scrivens and Frank (2016); Scrivens et al.…”
Section: NLP Techniques for Extremism Research (mentioning)
confidence: 99%
“…Given the high risks of incorrect flags that lead to the takedown of innocent users and their content, auditing and evaluating the AI approaches in use in content moderation is of significant concern, especially considering the demonstrable biases against women and minorities that studies of algorithms have revealed (Eubanks, 2018; Noble, 2018). While many projects have focused on how to detect extremist content, Hall, Logan, Ligon, and Derrick (2020) instead evaluate the performance of machines against human judgement, probing the limits of text‐based methods for the classification of extremism. They find that, for jihadist content, AI approaches to detecting extremist content require significant work to integrate human understanding into machine abilities.…”
Section: Content Moderation and Takedown (mentioning)
confidence: 99%
“…While these approaches perform well for high‐level concepts, humans provide more granular analysis that identifies key themes and forms of content, such as emotion. By engaging in a validation of open‐source AI tools in the detection of extremist content, Hall et al. (2020) provide valuable advancements in research design and methodology that can be applied in future studies, probing the possibilities and limits of technical systems in primary CVE and identifying key challenges that software must surmount to be a viable alternative to human‐led moderation.…”
Section: Content Moderation and Takedown (mentioning)
confidence: 99%
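The machine-versus-human comparison these citing papers describe can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the evaluation pipeline from Hall et al. (2020): it assumes a simple binary labeling task, the labels and variable names are invented, and agreement is measured with scikit-learn's Cohen's kappa and per-class precision/recall.

# Hypothetical sketch: comparing a classifier's flags against human-coded
# labels for the same items (1 = flagged as extremist content, 0 = benign).
from sklearn.metrics import cohen_kappa_score, classification_report

human_labels   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # human coders' judgements (made up)
machine_labels = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # classifier output (made up)

# Cohen's kappa measures agreement beyond chance; the per-class report shows
# where the machine over- or under-flags relative to the human coders.
print("Cohen's kappa:", round(cohen_kappa_score(human_labels, machine_labels), 3))
print(classification_report(human_labels, machine_labels,
                            target_names=["benign", "extremist"]))

On real coded data, a low kappa or skewed precision/recall would indicate exactly the gap between machine classification and human judgement that the citing authors highlight.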
“…Ligon and D.C. Derrick, authors of the work "Do Machines Replicate Humans? Toward a Unified Understanding of Radicalizing Content on the Open Social Web", observe that the ease of spreading destructive content, as well as the variety of its delivery methods, gives radical communities access to a massive audience, which allows them to find and recruit supporters, spread ideologically significant information, attract resources, and exercise general management [1]. At the same time, the Internet itself is not the cause of radicalization, but only a tool and platform that provides previously unattainable reach and speed of social communication.…”
Section: Introduction (mentioning)
confidence: 99%