2019
DOI: 10.1017/s1744552319000119

Regulating terrorist content on social media: automation and the rule of law

Abstract: Social-media companies make extensive use of artificial intelligence in their efforts to remove and block terrorist content from their platforms. This paper begins by arguing that, since such efforts amount to an attempt to channel human conduct, they should be regarded as a form of regulation that is subject to rule-of-law principles. The paper then discusses three sets of rule-of-law issues. The first set concerns enforceability. Here, the paper highlights the displacement effects that have resulted from the…

Cited by 30 publications (20 citation statements) | References 29 publications
“…However, not all levels of online interaction are created equal. In recent years, there have been a number of studies which discuss terrorists' utilization of end-to-end encrypted platforms, particularly as they have been pushed away from mainstream social media sites (Conway et al., 2018; Macdonald, Correia, & Watkin, 2019). These platforms offer a higher level of operational security and, by their nature, are not amenable to subpoena.…”
Section: Online Activity as an Impediment to Success (mentioning)
confidence: 99%
“…The results indicate that textual models using vector-embedding features significantly improve detection over TF-IDF features [35]. Stuart and others [15] highlight the displacement effects that result from the automated removal and blocking of terrorist content and suggest that regard must be had to the whole social-media ecology, covering jihadist groups other than the so-called Islamic State as well as other forms of violent extremism. Since rule by law is only a necessary, and not a sufficient, condition for compliance with rule-of-law values, the study examines two further sets of issues: the clarity with which social-media companies define terrorist content, and the adequacy of the processes by which a user may appeal against an account suspension or the blocking or removal of content.…”
Section: Related Work (mentioning)
confidence: 91%
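The TF-IDF versus embedding-feature comparison mentioned in the citing statement above can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the cited study's system: the corpus, labels, and model choice (scikit-learn's TfidfVectorizer with a logistic-regression classifier) are assumptions introduced purely for illustration, and an embedding-based variant would simply replace the sparse vectorizer with dense averaged word vectors.

# Minimal, hypothetical sketch of a TF-IDF baseline for flagging extremist-style posts.
# The corpus, labels, and model are illustrative placeholders, not the cited work's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "join our cause and spread the message",      # illustrative "flagged" post
    "weekend football scores and highlights",     # illustrative benign post
    "support the fighters and share this video",  # illustrative "flagged" post
    "new recipe for lemon cake",                  # illustrative benign post
]
labels = [1, 0, 1, 0]  # 1 = flagged, 0 = benign (placeholder labels)

# Sparse TF-IDF features feeding a linear classifier; an embedding-based variant
# would swap TfidfVectorizer for averaged dense word vectors.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)
print(model.predict(["share the fighters message"]))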
“…The challenges in sentiment analysis and text classification are widely studied in the literature [15], [14]. Shah et al. create a multilingual sentiment lexicon with intensity weights to classify social-media content as high extreme, low extreme, moderate, or neutral.…”
Section: Related Work (mentioning)
confidence: 99%
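As a rough illustration of the lexicon-with-intensity-weights approach described in the statement above, the sketch below sums hand-picked weights over matched terms and maps the total onto the four bands (high extreme, low extreme, moderate, neutral). The lexicon entries and thresholds are invented for illustration and are not Shah et al.'s multilingual resource.

# Hypothetical intensity lexicon and thresholds; the values are placeholders only.
INTENSITY_LEXICON = {
    "attack": 0.9,
    "martyr": 0.8,
    "fight": 0.6,
    "support": 0.3,
}

def classify(post: str) -> str:
    """Sum intensity weights of matched tokens and map the score to a band."""
    tokens = post.lower().split()
    score = sum(INTENSITY_LEXICON.get(tok, 0.0) for tok in tokens)
    if score >= 1.5:
        return "high extreme"
    if score >= 0.8:
        return "low extreme"
    if score > 0.0:
        return "moderate"
    return "neutral"

print(classify("support the fight"))  # 0.3 + 0.6 = 0.9 -> "low extreme" under these toy weights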
“…In 23 months, starting from August 2015, Twitter suspended about a million accounts for promoting violence (Fernandez et al., 2018; Singh et al., 2018). In the latter half of 2017, YouTube deleted 150,000 videos spreading violence and extremism, and about half of these videos were removed within two hours of being uploaded (Macdonald, 2018; Macdonald et al., 2019).…”
Section: Artificial Intelligence (AI) (mentioning)
confidence: 99%