Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) 2022
DOI: 10.18653/v1/2022.gebnlp-1.1

Analyzing Hate Speech Data along Racial, Gender and Intersectional Axes

Abstract: Warning: This work contains strong and offensive language, sometimes uncensored. To tackle the rising phenomenon of hate speech, efforts have been made towards data curation and analysis. When it comes to analysis of bias, previous work has focused predominantly on race. In our work, we further investigate bias in hate speech datasets along racial, gender and intersectional axes. We identify strong bias against African American English (AAE), masculine and AAE+Masculine tweets, which are annotated as disproport…
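The abstract describes comparing how often tweets from different demographic groups are annotated as hateful, along single axes (race, gender) and their intersection (e.g. AAE + masculine). The sketch below is not the authors' pipeline; it is a minimal, assumption-based illustration of that kind of comparison, where the file name and the columns dialect, gender and hateful are hypothetical.

```python
# Minimal sketch (an illustration under assumed column names, not the
# authors' exact method): compare annotation rates of the "hateful" label
# along single and intersectional demographic axes.
import pandas as pd

df = pd.read_csv("hate_speech_annotations.csv")  # hypothetical dataset

overall_rate = df["hateful"].mean()  # fraction of tweets labelled hateful

# Single axes: dialect (e.g. AAE vs. non-AAE) and gender signal.
for axis in ["dialect", "gender"]:
    print(df.groupby(axis)["hateful"].mean(), "\n")

# Intersectional axis: dialect x gender (e.g. AAE + masculine).
intersectional_rates = df.groupby(["dialect", "gender"])["hateful"].mean()
print(intersectional_rates)

# Flag groups whose annotation rate is far above the overall rate;
# the 1.5x threshold here is purely illustrative.
flagged = intersectional_rates[intersectional_rates > 1.5 * overall_rate]
print("Disproportionately annotated groups:\n", flagged)
```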

Cited by 6 publications (3 citation statements)
References: 15 publications

Citation statements:
“…A significant body of work that has examined intersectional hate detection has, for the most part, concentrated on addressing intersectional bias in hate speech datasets (Maronikolakis et al., 2022; Rankin and Thomas, 2020; Kim, 2020). Chandra et al. (2021) and Fitzsimons (2022) examined the quantification of intersectional injustice across several demographic groups on Twitter and discovered that the collection of intersectional data is grossly inadequate, and NLP is merely a piece in inherent biases in intersectional hate detections.…”
Section: Intersectional Hate Detection
confidence: 99%
“…When considering methods of mitigating bias in NLP, AAE's unique morphosyntactic structures are often neglected. Semantic context and lexical choice are more commonly accounted for (Barikeri et al., 2021; Cheng et al., 2022; Garimella et al., 2022; Hwang et al., 2020; Kiritchenko and Mohammad, 2018; Maronikolakis et al., 2022; Silva et al., 2021), but when focusing on improving a model's understanding of AAE, research often involves removing its morphological features (Tan et al., 2020) or translating between MAE and AAE (Ziems et al., 2023). In contrast, our work leverages AAE's morphosyntactic differences to improve disambiguation of habitual and non-habitual "be", rather than neutralizing the uniqueness of AAE.…”
Section: Related Work
confidence: 99%
“…gender and race, intersect with and reinforce each other. Drawing on this framework, Tan and Celis (2019a), May et al. (2019), Lepori (2020), Maronikolakis et al. (2022) and Guo and Caliskan (2021) analyse the compounding effects of race and gender encoded in contextualised word representations and downstream tasks. Recently, Lalor et al. (2022) and Jiang and Fellbaum (2020) show the harmful implications of intersectionality effects in pre-trained language models.…”
Section: Related Work
confidence: 99%