Findings of the Association for Computational Linguistics: ACL 2022
DOI: 10.18653/v1/2022.findings-acl.87

Listening to Affected Communities to Define Extreme Speech: Dataset and Experiments

Abstract: Building on current work on multilingual hate speech (e.g., Ousidhoum et al. (2019)) and hate speech reduction (e.g., Sap et al. (2020)), we present XtremeSpeech, a new hate speech dataset containing 20,297 social media passages from Brazil, Germany, India and Kenya. The key novelty is that we directly involve the affected communities in collecting and annotating the data, as opposed to giving companies and governments control over defining and combating hate speech. This inclusive approach results in datas…

Cited by 4 publications (3 citation statements)
References 24 publications
“…After the completion of this process, 50% of the annotated passages were cross-annotated by another fact-checker to check inter-annotator agreement. Cohen's kappa (κ; McHugh, 2012), Krippendorff's alpha (α; Krippendorff, 2011), the intraclass correlation coefficient (two-way mixed, average-score ICC(3, k) for k = 2; Cicchetti, 1994) and accuracy, the percentage of passages on which both annotators agreed, were measured (Maronikolakis et al., 2022). For the three labels of derogatory, exclusionary and dangerous speech, we obtained κ = 0.23, α = 0.24 and ICC(3, k) = 0.41, which is considered "fair" (Cicchetti, 1994; Maronikolakis et al., 2022).…”
Section: Methods and Data
confidence: 99%
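The agreement statistics quoted above can be illustrated with a short pure-Python sketch: Cohen's kappa compares the observed agreement between two annotators against the agreement expected by chance from each annotator's label marginals. The annotation lists below are toy data for illustration, not the XtremeSpeech annotations.

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labelling the same passages."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # observed agreement: fraction of passages where both annotators agree
    # (this is the "accuracy" figure reported alongside kappa)
    p_o = sum(x == y for x, y in zip(ann_a, ann_b)) / n
    # chance agreement, computed from each annotator's label distribution
    count_a, count_b = Counter(ann_a), Counter(ann_b)
    p_e = sum((count_a[label] / n) * (count_b[label] / n)
              for label in set(ann_a) | set(ann_b))
    return (p_o - p_e) / (1 - p_e)

# toy cross-annotations over the paper's three labels (hypothetical data)
ann1 = ["derogatory", "exclusionary", "derogatory", "dangerous", "derogatory"]
ann2 = ["derogatory", "derogatory", "derogatory", "dangerous", "exclusionary"]
print(round(cohens_kappa(ann1, ann2), 2))  # → 0.29
```

A κ near 0.23, as reported, indicates only "fair" agreement: the annotators agree noticeably more often than chance, but the label boundaries remain subjective.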
“…Noise audit: While limited literature exists on investigating the generalizability of offensive speech detection systems across datasets (Arango et al., 2019), political discourse (Grimminger and Klinger, 2021; Maronikolakis et al., 2022), vulnerability to adversarial attacks (Gröndahl et al., 2018), unseen use cases, and geographic biases (Ghosh et al., 2021), to the best of our knowledge, no work exists on a comprehensive, in-the-wild evaluation of offensive speech filtering outcomes on large-scale, real-world political discussions. One key impediment to performing in-the-wild analysis of content moderation systems is a lack of ground truth.…”
Section: Definitions
confidence: 99%
“…XtremeSpeech English [46]: The complete dataset contains 5,180 texts collected from Facebook, Twitter and WhatsApp. The dataset is not yet public, but the authors have kindly shared with us a subset of 2,639 texts written in English that focuses on Kenya as a geographic location.…”
Section: Data
confidence: 99%