2022
DOI: 10.1177/20539517221115666

Shall AI moderators be made visible? Perception of accountability and trust in moderation systems on social media platforms

Abstract: This study examines how the visibility of a content moderator and the ambiguity of moderated content influence perceptions of the moderation system in a social media environment. In a two-day pre-registered experiment conducted in a realistic social media simulation, participants encountered moderated comments that were either unequivocally harsh or ambiguously worded, and the source of moderation was either unidentified or attributed to other users or to an automated system (AI). The results show that when…

Cited by 10 publications (2 citation statements) | References: 55 publications
“…For instance, warning labels are more effective when attributed to fact-checkers than to layperson crowds (38). Algorithmic content moderation also suffers from low perceived confidence: labels attributed to AI are even less effective than those from the public (38), and visible AI moderator systems are subject to less perceived trust and accountability in the moderation process (39). Furthermore, a recent experiment with 100 Facebook users found that moderation decisions made by expert panels were perceived as more legitimate than decisions from layperson juries or algorithms (40).…”
Section: Significance Statement (mentioning)
Confidence: 99%
“…Greater consistency may lead to increased trust and improved discourse [35]. Unfortunately, the public perception is that content moderation is inconsistently implemented [46,49]. Keller et al. [34] also show that it is often difficult to determine where, how, and according to what rules content moderation is occurring.…”
Section: Introduction (mentioning)
Confidence: 99%