Social-media companies make extensive use of artificial intelligence in their efforts to remove and block terrorist content from their platforms. This paper begins by arguing that, since such efforts amount to an attempt to channel human conduct, they should be regarded as a form of regulation that is subject to rule-of-law principles. The paper then discusses three sets of rule-of-law issues. The first set concerns enforceability. Here, the paper highlights the displacement effects that have resulted from the automated removal and blocking of terrorist content and argues that regard must be had to the whole social-media ecology, as well as to jihadist groups other than the so-called Islamic State and other forms of violent extremism. Since rule by law is only a necessary, and not a sufficient, condition for compliance with rule-of-law values, the paper then goes on to examine two further sets of issues: the clarity with which social-media companies define terrorist content and the adequacy of the processes by which a user may appeal against an account suspension or the blocking or removal of content. The paper concludes by identifying a range of research questions that emerge from the discussion and that together form a promising and timely research agenda to which legal scholarship has much to contribute.
Scholars have argued for years that responses to terrorist content on tech platforms have, to date, been inadequate. Past responses have been reactive and fragmented, with tech platforms self-regulating. Over the last few years, many governments have concluded that the self-regulatory approach was not working. As a result, a number of regulatory frameworks have been proposed and/or implemented. However, they have been heavily criticised. The purpose of this thesis is to propose a new regulatory framework to counter terrorist content on tech platforms and to overcome many of these criticisms. Scholars have argued that it is vital that future regulation be informed by past experience and supported by evidence from prior research. Therefore, a number of steps were taken. First, this thesis reviews the literature on which platforms are exploited by terrorist organisations. Next, a content analysis was undertaken of blog posts that tech platforms publish, in order to investigate the efforts that tech platforms report making to counter terrorist content on their services and the challenges that they face. Third, a sample of existing or currently proposed regulatory frameworks was examined in order to learn what was done well and what gaps, limitations and challenges exist that require addressing in future regulation. Finally, social regulation theory was identified as applicable in this regulatory context. Social regulation strategies were examined in three other regulatory contexts in order to assess whether they could be used in this one. The findings from these analyses were used to inform the new regulatory framework proposed in this thesis. In addition to proposing a new regulatory framework, this thesis also identified three compliance issues that tech platforms may face. These compliance issues are addressed alongside the proposal of the framework.
Overall, it is argued that previous regulatory attempts failed to consider the diverse array of challenges faced by different platforms when countering terrorist content. The regulatory framework proposed in this thesis was informed by research into these challenges and identifies strategies from a social regulation approach, drawing lessons from how they were applied elsewhere to overcome some of the key criticisms and limitations of existing regulatory practice.
Her research expertise lies at the intersection of Digital Discourse Analysis and Criminology. Relevant funded projects she leads include examining radical right groups' use of social media, profiling online sexual groomers' language use and exploring constructions of trust in crypto-drug markets.
This article reports and discusses the results of a study that investigated photographic images of children in five online terrorist magazines to understand the roles of children in these groups. The analysis encompasses issues of Inspire, Dabiq, Jihad Recollections (JR), Azan, and Gaidi Mtanni (GM) published from 2009 to 2016, comprising ninety-four images in total. A news value framework was applied to systematically investigate what values the images held that made them "newsworthy" enough to be published. This article discusses the key findings: Dabiq distinguished different roles for boys and girls, portraying fierce and prestigious boy perpetrators and children flourishing under the caliphate; Inspire and Azan focused on portraying children as victims of Western-backed warfare; GM portrayed children supporting the cause peacefully; and JR contained no recurring findings.