Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age of onset of first language acquisition and the quality and quantity of linguistic input are highly heterogeneous for deaf individuals, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1) and in deaf individuals who were late learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not of the sensory and motor modality through which the linguistic signal is sent and received.
Joint attention between hearing children and their caregivers is typically achieved when the adult provides spoken, auditory linguistic input that relates to the child’s current visual focus of attention. Deaf children interacting through sign language must learn to continually switch visual attention between people and objects in order to achieve the classic joint attention characteristic of young hearing children. The current study investigated the mechanisms used by sign language dyads to achieve joint attention within a single modality. Four deaf children, ages 1;9 to 3;7, were observed during naturalistic interactions with their deaf mothers. The children engaged in frequent and meaningful gaze shifts, and were highly sensitive to a range of maternal cues. Children’s control of gaze in this sample was largely developed by age two. The gaze patterns observed in deaf children were not observed in a control group of hearing children, indicating that modality-specific patterns of joint attention behaviors emerge when the language of parent-infant interaction occurs in the visual mode.
Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) Is meaning extracted and integrated from signs using the same classical left hemisphere fronto-temporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using magnetoencephalography (MEG) constrained by individual cortical anatomy obtained with magnetic resonance imaging (MRI), we examined an early time window associated with sensory processing and a late time window associated with lexico-semantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left fronto-temporal network (including superior temporal regions surrounding auditory cortex) during lexico-semantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are utilized for processing language regardless of modality or hearing status, and we do not find evidence for re-wiring of afferent connections from visual systems to auditory cortex.
The relation between the timing of language input and development of neural organization for language processing in adulthood has been difficult to tease apart because language is ubiquitous in the environment of nearly all infants. However, within the congenitally deaf population are individuals who do not experience language until after early childhood. Here, we investigated the neural underpinnings of American Sign Language (ASL) in 2 adolescents who had no sustained language input until they were approximately 14 years old. Using anatomically constrained magnetoencephalography, we found that recently learned signed words mainly activated right superior parietal, anterior occipital, and dorsolateral prefrontal areas in these 2 individuals. This spatiotemporal activity pattern was significantly different from the left fronto-temporal pattern observed in young deaf adults who acquired ASL from birth, and from that of hearing young adults learning ASL as a second language for a similar length of time as the cases. These results provide direct evidence that the timing of language experience over human development affects the organization of neural language processing.