2021
DOI: 10.1007/s42380-021-00105-7
AI Content Moderation, Racism and (de)Coloniality

Abstract: The article develops a critical approach to AI in content moderation, adopting a decolonial perspective. In particular, the article asks: to what extent does the current AI moderation system of platforms address racist hate speech and discrimination? Based on a critical reading of publicly available materials and publications on AI in content moderation, we argue that racialised people have no significant input in the definitions and decision-making processes on racist hate speech and are also exploited as thei…

Cited by 14 publications (1 citation statement). References 22 publications.
“…This research illuminates the consequences of algorithmic invisibility—how algorithmic systems, in this case social media content moderating systems, deny access to visibility and engagement to users with marginalized identities and how those impacted theorize the mechanisms and motivations around this invisibility (Bucher, 2012; Cotter, 2019). It is, however, important to note that visibility can lead to exposure to harm, therefore leaving vulnerable groups experiencing harm from both invisibility and hypervisibility (Dinar, 2021; Díaz & Hecht-Felella, 2021; Marshall, 2021; Siapera, 2022). Counterbalancing the empirically proven algorithmic oppression of historically marginalized identities on digital platforms (Noble, 2018), “algorithmic privilege” has emerged as a framework for understanding users who are “positioned to benefit from how an algorithm operates on the basis of identity” (Karizat et al., 2021, p. 3).…”
Section: Folk Theorization of Social Media Use
Citation type: mentioning (confidence: 99%)