2021
DOI: 10.1177/14614448211032310
Common sense or censorship: How algorithmic moderators and message type influence perceptions of online content deletion

Abstract: Hateful content online is a concern for social media platforms, policymakers, and the public. This has led high-profile content platforms, such as Facebook, to adopt algorithmic content-moderation systems; however, the impact of algorithmic moderation on user perceptions is unclear. We experimentally test the extent to which the type of content being removed (profanity vs hate speech) and the explanation given for its removal (no explanation vs link to community guidelines vs specific explanation) influence us…

Cited by 28 publications (9 citation statements)
References 60 publications
“…That implies that librarians have the obligation to let any voice be heard. Censorship, online (Gonçalves et al, 2021) or in physical libraries (Kann-Christensen and Pors, 2004) only makes some information more attractive for groups who leverage it to gain consensus. However, the transfer of knowledge and free formation of opinions also entails communicating facts not rumours, opinions or misinformation and critical points of view against the risks of some ideologies, propaganda and harmful information practices.…”
Section: Neutrality As An Act Of Change
confidence: 99%
“…That implies that librarians have the obligation to let any voice be heard. Censorship, online (Gonçalves et al. , 2021) or in physical libraries (Kann-Christensen and Pors, 2004) only makes some information more attractive for groups who leverage it to gain consensus.…”
Section: Theoretical Framework
confidence: 99%
“…These features must be at a very granular level to be implemented for machine detection, which can be challenging in the context of content moderation (Molina et al, 2021). 2 When users think that AI is unable to detect subtleties of human language, they tend to perceive AI as unfit to classify usergenerated content, because incorrect classification can lead to content being taken down, an issue many believe to be a violation of freedom of their speech, tantamount to censorship (Gollatz et al, 2018;West, 2018).…”
Section: Perceptions Of AI Versus Human
confidence: 99%
“…Similarly, Jhaver et al's (2018) analysis revealed that one of the biggest challenges of human classification is that moderators have "different viewpoints and tolerance level, and what might offend one person may be perfectly reasonable to another" (p. 22). In other words, although humans do indeed have the capacity to contextualize and empathize with a post (Gollatz et al, 2018), users acknowledge that humans have their own biases and experiences that can also influence their decision-making (Jhaver et al, 2018). In fact, in one set of studies, participants expressed more acceptance of AI compared with humans for content moderation because decisions made by AI are at least based on the same constant rules being applied (Binns et al, 2018;Jhaver et al, 2018), and, thus, are "statistically fair" (Binns et al, 2018: 9).…”
Section: Perceptions Of AI Versus Human
confidence: 99%
“…Other forms of action delegations may imply that we let machines speak for us, in order to maintain decency of speech (Hancock, Naaman, & Levy, 2020). Puritan norms would require people to discipline themselves into suppressing emotions like anger or infatuation, so that their speech be free of hostility or innuendo; machine puritanism would give people leave to feel whatever they feel, in exchange for letting machines rewrite their emails, text messages, and social media posts to eliminate every trace of inappropriate speech (Gonçalves et al, 2021). In a more extreme form of this norm, people may be expected to let a machine block their communications if the machine detects that they are in too emotionally aroused a state.…”
Section: Introduction
confidence: 99%