Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.487
Social Biases in NLP Models as Barriers for Persons with Disabilities

Abstract: Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models. In particular, representations encoded in models often inadvertently perpetuate undesirable social biases from the data on which they are trained. In this paper, we present evidence of such undesirable biases towards mentions of disability in two different English language models: toxicity prediction and sentiment analysis. Next, we demonstrate that the neural embeddings tha…

Cited by 170 publications (126 citation statements)
References 29 publications
“…Although widely used, the PERSPECTIVE API and other hate speech detection systems and corpora exhibit biases against minorities and suffer from low agreement in annotations (Waseem, 2016; Ross et al., 2017), partially due to annotator identity influencing their perception of hate speech (Cowan and Khatchadourian, 2003) and differences in annotation task setup (Sap et al., 2019). Notably, recent work has found that systems overestimate the prevalence of toxicity in text that contains a minority identity mention (e.g., "I'm a gay man"; Hutchinson et al., 2020) or text by racial minorities (e.g., text in African American English; Sap et al., 2019; Davidson et al., 2019). This is partially due to detectors' over-reliance on lexical cues of toxicity (including swearwords, slurs, and other "bad" words)…”
Section: Biases In Toxic Language Detectionmentioning
confidence: 99%
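The overestimation pattern described in the statement above can be probed with a simple perturbation test. The sketch below is an illustration only: the word list, scoring rule, and templates are assumptions (this is not the Perspective API or any real detector), chosen to show how over-reliance on pure lexical cues can inflate toxicity scores for innocuous identity mentions.

```python
# Hypothetical "bad word" lexicon. Real systems learn such cues from data;
# here identity terms are (wrongly) listed to mimic a spurious lexical cue.
LEXICAL_CUES = {"stupid", "idiot", "gay", "blind"}

def toxicity_score(text: str) -> float:
    """Toy scorer: fraction of tokens matching the lexical-cue list."""
    tokens = text.lower().split()
    hits = sum(1 for t in tokens if t.strip('.,"!?') in LEXICAL_CUES)
    return hits / max(len(tokens), 1)

# Template perturbation: swap in different descriptors and compare scores.
template = "I'm a {} person"
neutral = toxicity_score(template.format("tall"))
identity = toxicity_score(template.format("blind"))
print(f"neutral={neutral:.2f} identity={identity:.2f}")
```

Despite both sentences being equally innocuous, the identity-mention variant receives a higher score, which is the failure mode the cited works document at scale.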
“…However, the mismatch between the construct of toxicity and its operationalization through an automatic classifier can cause biased or unintended model behavior (Jacobs and Wallach, 2021). Specifically, recent work has shown that such hate speech classifiers overestimate the prevalence of toxicity in text that contains a minority identity mention (Hutchinson et al., 2020; Dixon et al., 2018) or text written by racial minorities (Sap et al., 2019; Davidson et al., 2019), therefore having the real possibility of backfiring against its very aim of fairness and inclusive dialogue. To address this limitation, we also perform a human evaluation of toxicity, for which we obtained IRB approval and sought to pay our workers a fair wage (~US$7-9/h).…”
Section: Broader Impact and Ethical Implicationsmentioning
confidence: 99%
“…We also have the extensive study by Leavy et al. [74] on CBOW trained on articles from The Guardian and the British Digital Library. Also, Hutchinson et al. [82] study the perception of models towards disabled people, and Bhardwaj et al. [78] combine the study of gender bias on BERT via sentiment analysis with gender separability.…”
Section: Association Testsmentioning
confidence: 99%
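Association tests of the kind referenced in this section typically compare a target word's mean cosine similarity to two attribute sets (WEAT-style). A minimal sketch follows; the 3-dimensional vectors are made-up illustrative values, not trained embeddings, and the function names are assumptions for this example.

```python
import math

def cos(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(w, A, B):
    """s(w, A, B): mean similarity to attribute set A minus to set B."""
    return (sum(cos(w, a) for a in A) / len(A)
            - sum(cos(w, b) for b in B) / len(B))

# Hypothetical embeddings: one target word, two attribute sets.
target = [0.9, 0.1, 0.0]
pleasant = [[1.0, 0.0, 0.0], [0.8, 0.2, 0.0]]
unpleasant = [[0.0, 1.0, 0.0], [0.1, 0.9, 0.0]]
print(round(association(target, pleasant, unpleasant), 3))
```

A strongly positive value indicates the target word sits closer to the first attribute set; applied to identity terms (e.g., disability mentions) versus pleasant/unpleasant attributes, this quantifies the embedding associations the cited studies analyze.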