2020
DOI: 10.1177/2053951720943234
Content moderation, AI, and the question of scale

Abstract: AI seems like the perfect response to the growing challenges of content moderation on social media platforms: the immense scale of the data, the relentlessness of the violations, and the need for human judgments without wanting humans to have to make them. The push toward automated content moderation is often justified as a necessary response to the scale: the enormity of social media platforms like Facebook and YouTube stands as the reason why AI approaches are desirable, even inevitable. But even if we could…



Cited by 238 publications (166 citation statements). References 16 publications.
“…Our results shed light on this new genre, which is beginning to be explored not just from a linguistic point of view, but also from the point of view of content moderation (Gillespie, 2018, 2020; Risch & Krestel, 2018; Seering, Wang, Yoon & Kaufman, 2019).…”
Section: Discussion
confidence: 86%
“…Based on his recent research into AI data workers in South America, Schmidt (2019) has pointed out that most of the value is in validation data, where ‘human cognition is needed to evaluate the decisions that machine-learning systems have made’ (9). Moreover, research has pointed out how Facebook obscures its human content moderators (Gillespie, 2020), and how Google uses human raters to perform quality assessments (Bilić, 2016). The type of validation work is also found in the use of human labor in home assistants, such as Amazon Echo and Google Home (Day et al., 2019; Verheyden et al., 2019).…”
Section: The Invisible Backstage of AI Production
confidence: 99%
“…The expanding volume and velocity of user participation makes it increasingly difficult and expensive to rapidly detect and remove undesirable posts (Sood et al, 2012;Gillespie, 2020). Fully automated Machine Learning (ML) approaches for text classifications have shown remarkable improvements over the last years.…”
Section: Introduction
confidence: 99%
“…Fully automated Machine Learning (ML) approaches for text classifications have shown remarkable improvements over the last years. However, ML models still lack user acceptance and applicability (Brunk et al, 2019; Gillespie, 2020). Fully automated approaches are known to be error-prone (Scharkow, 2013) and rarely reach the level of accuracy required to be applied in real-world settings.…”
Section: Introduction
confidence: 99%