Proceedings of the Third Workshop on Abusive Language Online 2019
DOI: 10.18653/v1/w19-3507

The Discourse of Online Content Moderation: Investigating Polarized User Responses to Changes in Reddit’s Quarantine Policy

Abstract: Recent concerns over abusive behavior on their platforms have pressured social media companies to strengthen their content moderation policies. However, user opinions on these policies have been relatively understudied. In this paper, we present an analysis of user responses to a September 27, 2018 announcement about the quarantine policy on Reddit as a case study of to what extent the discourse on content moderation is polarized by users' ideological viewpoint. We introduce a novel partitioning approach for c…

Cited by 37 publications (32 citation statements)
References 23 publications
“…On the other hand, there are several actors, such as institutions and ICT companies, that need to comply with governments' demands for counteracting hate speech online (see, for instance, the recently issued EU Commission Code of Conduct on countering illegal hate speech online [4]). This generates an increasing need for automatic support for content moderation [5] or for monitoring and mapping the diffusion of hate speech and its dynamics over a geographic territory [1], which is only possible at large scale by employing computational methods. Moreover, having reliable methods to automatically compute an index of online hate speech for specific geo-temporal coordinates opens the way to investigating the interplay between the volume of hate speech messages and traditional socio-economic and demographic indexes for a given area and period (see [6] for a preliminary proposal on the Italian case), or to studying the impact of offline violent events on hateful online messages [7].…”
Section: Related Work
confidence: 99%
“…The relationship between a social media artifact and various forms of established political knowledge can also be used to ground or validate ideology labels. Examples include using author interactions with politicians of known party affiliation (Djemili et al., 2014; Barberá, 2015), ideological communities (Chandrasekharan et al., 2017; Shen and Rosé, 2019), and central users (Pennacchiotti and Popescu, 2011) as a starting heuristic, or evaluating a labeling approach by comparing geolocation tags attached to posts with historical voting patterns (Demszky et al., 2019).…”
Section: Introduction
confidence: 99%
“…This makes it difficult to know whether a model that performs well on one dataset will generalize to other datasets. However, several actors (including institutions, NGO operators, and ICT companies that must comply with governments' demands for counteracting online abuse) have an increasing need for automatic support for moderation (Shen and Rosé, 2019; Chung et al., 2019) or for monitoring and mapping the dynamics and diffusion of hate speech over a territory (Paschalides et al., 2020; Capozzi et al., 2019), considering different targets and vulnerable categories. In this scenario, there is considerable urgency to investigate computational approaches to abusive language detection that support the development of robust models, which can be used to detect abusive content with different scopes or topical focuses.…”
Section: Introduction
confidence: 99%