2022
DOI: 10.1177/20563051221117552

Do Not Recommend? Reduction as a Form of Content Moderation

Abstract: Public debate about content moderation has overwhelmingly focused on removal: social media platforms deleting content and suspending users, or opting not to do so. However, removal is not the only available remedy. Reducing the visibility of problematic content is becoming a commonplace element of platform governance. Platforms use machine learning classifiers to identify content they judge misleading enough, risky enough, or offensive enough that, while it does not warrant removal according to the site guidel…
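To make the mechanism described in the abstract concrete, the following is a minimal, purely illustrative Python sketch, not the paper's or any platform's actual system: a hypothetical "borderline" classifier score demotes a post within an algorithmically curated feed instead of removing it. Every name, threshold, and weight below is an assumption chosen for illustration.

```python
# Illustrative sketch only (hypothetical names, thresholds, and weights):
# a classifier score reduces a post's ranking weight rather than removing the post.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_rank_score: float   # score from the normal relevance/engagement ranker
    borderline_score: float  # hypothetical classifier output in [0, 1]

REMOVE_THRESHOLD = 0.95      # clearly violating: handled by removal, not reduction
REDUCE_THRESHOLD = 0.60      # "borderline": kept on the platform but demoted
DEMOTION_FACTOR = 0.1        # how strongly reduced content is downranked

def rank_for_feed(posts: list[Post]) -> list[Post]:
    """Order posts for a recommendation feed, demoting borderline content."""
    eligible = []
    for post in posts:
        if post.borderline_score >= REMOVE_THRESHOLD:
            continue  # removal path: excluded entirely (outside the scope of reduction)
        weight = DEMOTION_FACTOR if post.borderline_score >= REDUCE_THRESHOLD else 1.0
        eligible.append((post.base_rank_score * weight, post))
    # Reduced posts remain retrievable (e.g., via direct link or search);
    # they simply sink in algorithmically curated surfaces like this feed.
    return [post for _, post in sorted(eligible, key=lambda pair: pair[0], reverse=True)]

if __name__ == "__main__":
    feed = rank_for_feed([
        Post("a", base_rank_score=0.9, borderline_score=0.1),
        Post("b", base_rank_score=0.8, borderline_score=0.7),   # demoted, not removed
        Post("c", base_rank_score=0.7, borderline_score=0.97),  # removal path
    ])
    print([p.post_id for p in feed])  # -> ['a', 'b']
```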

Cited by 75 publications (39 citation statements)
References 62 publications
“…Governance by platforms includes the massive content moderation apparatuses, outsourced in large part to contracting firms, that are responsible for detecting and reviewing content uploaded to social media platforms that violate the platform's community guidelines (Gillespie, 2018; Roberts, 2019). Both academic and industry reports tend to disproportionately frame the outcomes of content moderation as a process of content removal instead of the more common reduction in visibility, wherein content remains available on a platform but must be manually located rather than algorithmically curated for users (Gillespie, 2022; Zeng & Kaye, 2022).…”
Section: Content Moderation on Social Media Platforms
confidence: 99%
“…Mass reporting can be frustrating for casual users of social media platforms, but for full‐time social media creators, the threat of content takedowns, account suspensions and bans, and reductions in visibility through algorithmic recommendations can have severe and lasting consequences for their professional images, financial livelihoods, and emotional well‐being (Are, 2022; Bishop, 2019; Cotter, 2021; Duffy & Meisner, 2023). Research on content takedowns in the creator economy rightly addresses unfairness in automated content moderation (e.g., algorithmic reduction; see Gillespie, 2022), though considerably less is known about the experiences of creators attempting to manage abuses of the infrastructures for platform governance, such as the role of user‐driven flagging in content reporting and moderation (Crawford & Gillespie, 2016).…”
Section: Introduction
confidence: 99%
“…Virtually all social media networks have established rules of conduct and acceptable content for their services. They have a range of techniques at their disposal to enforce these standards, such as deleting [40,58], downranking [31], quarantining [11,56], labeling [69], or demonetizing [15] undesirable content, or temporarily or permanently banning the accounts of users who repeatedly post violating content ("deplatforming") [19,39,51].…”
Section: Content Moderation
confidence: 99%
“…The 15 most popular social media platforms in the United States each have different definitions of harassment in their community policies (Pater et al., 2016), reflecting not only a technical problem for platform operations but also a challenge to platforms' abilities to recognize and repair harm for the diverse communities they aim to serve (Schoenebeck & Blackwell, 2021). Given that all platforms must moderate the glut of content uploaded to their sites every second (Gillespie, 2018; Roberts, 2019), most major platforms have implemented extensive content moderation apparatuses that remove content and accounts, as well as algorithmic recommendation systems that render some content and accounts more or less visible (Gillespie, 2022). Content moderation, especially algorithmic content moderation (Gorwa et al., 2020), has not typically embraced content and cultural expressions deemed non-normative, including from already marginalized groups (Duffy & Meisner, 2022; Thach et al., 2022).…”
Section: Governing Harm and Toxicity on Social Media
confidence: 99%