2019
DOI: 10.1007/s10676-019-09516-z
Quarantining online hate speech: technical and ethical perspectives

Abstract: In this paper we explore quarantining as a more ethical method for delimiting the spread of Hate Speech via online social media platforms. Currently, companies like Facebook, Twitter, and Google generally respond reactively to such material: offensive messages that have already been posted are reviewed by human moderators if complaints from users are received. The offensive posts are only subsequently removed if the complaints are upheld; therefore, they still cause the recipients psychological harm. In additi…
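The quarantining approach the abstract contrasts with reactive deletion can be illustrated with a minimal sketch. This is not code from the paper; the class names, the threshold value, and the `hate_score` field (standing in for the output of some upstream classifier) are all illustrative assumptions. The key idea is that a suspected message is held back with a warning rather than silently delivered or deleted, leaving the final choice to the recipient:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    recipient: str
    text: str
    hate_score: float  # probability from some upstream classifier (assumed)

@dataclass
class Inbox:
    visible: list = field(default_factory=list)
    quarantined: list = field(default_factory=list)

QUARANTINE_THRESHOLD = 0.8  # illustrative value, not taken from the paper

def deliver(msg: Message, inbox: Inbox) -> str:
    """Quarantine-style delivery: suspected hate speech is held back
    with a warning instead of being deleted or delivered unseen."""
    if msg.hate_score >= QUARANTINE_THRESHOLD:
        inbox.quarantined.append(msg)
        return "quarantined"  # recipient sees a warning, not the text
    inbox.visible.append(msg)
    return "delivered"

def release(msg: Message, inbox: Inbox) -> None:
    """The recipient explicitly chooses to read a quarantined message."""
    inbox.quarantined.remove(msg)
    inbox.visible.append(msg)
```

Unlike reactive deletion, nothing is removed without the recipient's involvement: the message either reaches the inbox directly or waits in quarantine until released.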

Cited by 78 publications (57 citation statements)
References 25 publications
“…Today, a large part of human communication takes place in the digital sphere, for instance via social media [1][2], and so does hate speech, which can be harmful for individuals and society as a whole [3]. Ullmann and Tomalin [4], for instance, describe that "[…] offensive posts are only subsequently removed if the complaints are upheld, therefore, they still cause the recipients psychological harm." (p. 1).…”
Section: Introduction (mentioning)
confidence: 99%
“…As an example, our classification model can be directly deployed to practical applications as well as further developed by other researchers. Regarding the use of the model in real systems (e.g., to automate moderation), we repeat the advice from a previous study [77] that essentially states that even small misclassification rates are a problem, as removing comments based on automatic detection methods can impact a user's freedom of speech in social media platforms [104]. It is highly unlikely that "perfect" classifiers for online hate would ever be developed, especially considering the subjective nature of what online hate is [72,75].…”
Section: Practical Implications (mentioning)
confidence: 99%
“…(p. 8). For these reasons, ethical considerations in online hate detection are important [104]. Therefore, we do not advocate letting the model automatically decide on banning or removal of messages (perhaps apart from situations where false positives play only a smaller role).…”
Section: Practical Implications (mentioning)
confidence: 99%
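The citing study above argues against letting a classifier automatically remove messages, since even small misclassification rates can impinge on freedom of speech. One way to honour that advice, sketched below under assumptions of my own (the function name and threshold values are hypothetical, not from any of the cited works), is a policy whose outcomes deliberately exclude removal: high-confidence detections are quarantined and borderline cases are routed to a human moderator:

```python
def moderation_action(hate_prob: float,
                      quarantine_at: float = 0.9,
                      review_at: float = 0.5) -> str:
    """Map a classifier probability to a non-destructive action.
    Thresholds are illustrative; because the cited studies warn
    against automatic deletion, 'remove' is not a possible outcome."""
    if hate_prob >= quarantine_at:
        return "quarantine"    # held back, recipient warned
    if hate_prob >= review_at:
        return "human_review"  # routed to a moderator for a decision
    return "publish"
```

With this shape of policy, a false positive costs a delay or a warning rather than a silently deleted post, which is the asymmetry the cited advice asks for.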
“…In addition, this approach has frequently been criticised for delimiting freedom of expression, since it requires the service providers to elaborate and implement censorship regimes." [5]. [Translated from Indonesian:] Hurtful utterances take various forms of expression, some arising under conditions that are even more damaging.…”
Section: Pendahuluan (Introduction) (unclassified)