2019
DOI: 10.1145/3359265
Content Removal as a Moderation Strategy

Abstract: Moderators of online communities often employ comment deletion as a tool. We ask here whether, beyond the positive effect of shielding a community from undesirable content, comment removal actually causes the behavior of the comment's author to improve. We examine this question in a particularly well-moderated community, the ChangeMyView subreddit. The standard analytic approach of interrupted time-series analysis unfortunately cannot answer this question of causality because it fails to distinguish the ef…

Cited by 45 publications (12 citation statements); references 29 publications.
“…Good moderation is capable of warning users ahead of time in a stickied thread that toxic comments will not be tolerated, directly removing toxic comments, and banning frequent offenders. The results of a study by Srinivasan et al (2019) showed that effective moderation, although not a significant contributor to future reformed behaviors, did significantly contribute to reduced future rates of toxic comments.…”
Section: Mitigation Of Toxicity
confidence: 97%
“…We echoed some moderation strategies broadly applied in online spaces such as content removal and banning the end-user [31,54]. However, most of these common strategies are working separately.…”
Section: Synchronicity Enhances The Graduated Moderation And Amplifies The Violator's Voice
confidence: 99%
“…For example, deplatformization seems effective to stem toxic communication (Jhaver et al, 2021), while limiting access to toxic communities slowed down new member recruitment but did not affect established toxic communication within the community (Chandrasekharan et al, 2022). Content removal may effectively restore platform guideline compliance, yet it does not appear to affect overall platform behavior (Srinivasan et al, 2019). One solution might be a collaborative content moderation between AI and humans (Lai et al, 2022), mainly because AI-based content moderation on social media appears promising only from a technological perspective (Gillespie, 2020) but remains intransparent (Suzor et al, 2019), hard to understand, and largely unable to capture communicative context on the user side (Gorwa et al, 2020).…”
Section: Background and Related Work
confidence: 99%