2022
DOI: 10.1093/applin/amac066
Bias in Automatic Speech Recognition: The Case of African American Language

Abstract: Research on bias in artificial intelligence has grown exponentially in recent years, especially around racial bias. Many modern technologies which impact people’s lives have been shown to have significant racial biases, including automatic speech recognition (ASR) systems. Emerging studies have found that widely-used ASR systems function much more poorly on the speech of Black people. Yet, this work is limited because it lacks a deeper consideration of the sociolinguistic literature on African American Languag…

Cited by 18 publications (3 citation statements)
References 49 publications
“…Adamson, 2023; Gillham, 2024). This finding resonates, in a different but related scenario, with the view that some studies have established that currently available automatic speech recognition technologies poorly detect, if at all, and discriminate against the English spoken by Black people, especially African American Language (AAL), thereby exposing their racial bias and demographic discrimination against this type of English (Martin & Wright, 2023). Linguistic and racial biases are but two of the instances of bias that GenAI models, and not just AI detection models, have to contend with in their everyday deployment.…”
Section: Introduction
confidence: 78%
“…Though the actual abilities of these tools may be overhyped 92,93, we may be witnessing the emergence of a new benchmark against which human language use is measured. Concerningly, the biases embedded in these tools, which take the linguistic practices of socially powerful groups as the baseline for native, "proficient," or "human-like" language use, reinforce linguistic discrimination and contribute to the dehumanization of marginalized communities 92,94,95.…”
Section: Harm to Scientific Community
confidence: 99%
“…For example, Carter and Callesano (2018) note that different national dialects of Spanish (Cuban, Colombian, and Peninsular) are associated with different levels of occupational prestige and socioeconomic status. African American English (AAE) has long been studied in terms of materialized perceptions, for example in the workplace (McCluney et al. 2021), the court system (Rickford and King 2016), and automatic speech recognition programs (Martin and Wright 2023). As such, the perception of minoritized linguistic varieties, and the bodies that produce them, materializes as not eligible for higher paying employment, not credible in legal environments, and disadvantaged within technological systems, underscoring the ways in which marginalized Latinx bilingual language practices are policed.…”
Section: US Latinx Language and Ideology
confidence: 99%