Humans are remarkably efficient at parsing basic linguistic cues and show an equally impressive ability to produce and parse socially indexed cues from the language(s) they encounter. In this review, we focus on the ways in which questions of justice and equality are linked to these two abilities. We discuss how social and linguistic cues are theorized to become correlated with each other, describe listeners' perceptual abilities regarding linguistic and social cognition, and address how, in the context of these abilities, language mediates individuals’ negotiations with institutions and their agents—negotiations that often lead to discrimination or linguistic injustice. We review research that reports inequitable outcomes as a function of language use across education, employment, media, justice systems, housing markets, and health care institutions. Finally, we present paths forward for linguists to help fight against these discriminatory realities.
In three experiments, the present study investigated the primary phonological preparation (PP) unit in spoken word production in Korean. Adopting the form preparation paradigm, 23 native Korean speakers named pictures in homogeneous or heterogeneous lists. In homogeneous lists, the picture names shared the same initial phoneme (Experiment 1), initial consonant + vowel (CV) body (Experiment 2), or initial consonant + vowel + consonant (CVC) syllable (Experiment 3); in heterogeneous lists, the names shared no phonological components systematically. Relative to naming pictures in heterogeneous lists, participants' naming was significantly faster when the initial body or the initial syllable of the target names was shared. However, no form preparation effect emerged in Experiment 1, when only the initial phoneme was shared. These results suggest that the body serves as the primary PP unit in Korean; that is, native Korean speakers tend to plan spoken words in a body–coda fashion, probably owing to the joint contribution of the strong prevalence of the CV structure and early literacy instruction.
Research on bias in artificial intelligence has grown rapidly in recent years, especially around racial bias. Many modern technologies that affect people's lives have been shown to exhibit significant racial biases, including automatic speech recognition (ASR) systems. Emerging studies have found that widely used ASR systems perform much more poorly on the speech of Black people. Yet this work is limited because it lacks deeper engagement with the sociolinguistic literature on African American Language (AAL). In this paper, we therefore seek to integrate AAL research into these endeavors, analyzing the ways in which ASRs might be biased against the linguistic features of AAL and how the use of biased ASRs could prove harmful to speakers of AAL. Specifically, we (1) provide an overview of the ways in which AAL speakers have historically faced discrimination in the workforce and in healthcare, and (2) explore how introducing biased ASRs in these areas could perpetuate or even deepen linguistic discrimination. We conclude with a number of questions for reflection and future work, offering this document as a resource for cross-disciplinary collaboration.