2022
DOI: 10.1002/poi3.290
Safe from “harm”: The governance of violence by platforms

Abstract: A number of issues have emerged related to how platforms moderate and mitigate “harm.” Although platforms have recently developed more explicit policies in regard to what constitutes “hate speech” and “harmful content,” it appears that platforms often use subjective judgments of harm that specifically pertain to spectacular, physical violence—but harm takes on many shapes and complex forms. The politics of defining “harm” and “violence” within these platforms are complex and dynamic, and represent entrenched …

Cited by 21 publications (13 citation statements)
References 60 publications
“…The EU and UK are just two of a growing number of jurisdictions (Linklaters, 2021) pioneering regulation with the stated aims of pushing platforms to protect free expression whilst also transparently and consistently addressing the spread of harmful content and behaviour. The challenge is that the notion of ‘harm’, much less ‘online harm’, is not self-evident, and important critiques have focused on how ‘[online] harm’ is being conceptualised by platforms (DeCook et al, 2022) and by those pushing for measures to address it (Nash, 2019a; Turillazzi et al, 2022, p. 10). The question of definition and scope has proven to be particularly contentious within jurisdictions like the UK, which wanted to regulate not only illegal content and conduct that causes harm (e.g., speech that harasses or incites violence), but also ‘legal but harmful’ content and activity.…”
Section: The ‘Online Harms’ Debate and Humour
confidence: 99%
“…DeCook, Cotter, Kanthawala and Foyle, in Safe From ‘Harm’: The Governance of Violence by Platforms (2021), explore how platforms, in a bid to mitigate ‘harm’, are ‘powerfully shape[ing] normative notions of harm and violence, effectively managing perceptions of their actions and directing users’ understanding of what is “harmful” and what is not’. This article contributes to the emerging work on the significance of user agency within these conversations by highlighting the difficulties posed by specific communication nuances, especially across social media platforms.…”
Section: Reconceptualisation and Regulation
confidence: 99%
“…The latter, for example, has been subject to a recent study by Rogers (Rogers, 2020), who traces the migration of “extreme influencers” from Twitter to Telegram using user status metrics and content engagement available through both platforms' APIs. Studies on content moderation rely heavily on close readings of contemporary versions of content moderation policies, with few exceptions relying on versions archived by the Wayback Machine for longitudinal studies (see DeCook et al, 2022). These approaches have yet to be systematically combined, and are still prey to changing APIs and reprisals from platforms clamping down on researchers who must scrape information not accessible otherwise (Bond, 2021).…”
Section: A Digital Forensics For Empirical Content Moderation Research
confidence: 99%
“…Hateful Conduct: Galtung (1990) argues that cultural violence, which can be based on religion, ideology, and science, to name only a few of the categories he covers, is intimately linked with structural and direct violence, referring to it as “a substratum from which the other two can derive their nutrients” (see also DeCook et al, 2022). However, hateful conduct is not usually legally circumscribed, except in specific cases (e.g.…
Section: Counter-terrorism and Counter-extremism
confidence: 99%