Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and across the two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between the two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm and correlated significantly with performance on a nonlinguistic spatial Stroop task in a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that the role of inhibitory control in auditory word recognition is not limited to resolving perceptual competition in the phonological input, but extends to moderating competition that originates at the lexico-semantic level.
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.
We used picture–word interference (PWI) to investigate a) whether cross-language activation at the lexical level can yield phonological priming effects when the two languages do not share phonological representations, and b) whether semantic interference effects occur without articulatory competition. Bimodal bilinguals fluent in American Sign Language (ASL) and English named pictures in ASL while listening to distractor words that were 1) translation equivalents, 2) phonologically related to the target sign through translation, 3) semantically related, or 4) unrelated. Monolingual speakers named pictures in English. Production of ASL signs was facilitated by words that were phonologically related through translation and by translation equivalents, indicating that cross-language activation spreads from the lexical to the phonological level during production. Semantic interference effects were not observed for bimodal bilinguals, providing some support for a post-lexical locus of semantic interference, although we suggest that this result may instead reflect time-course differences between spoken and signed production in the PWI task.
The effect of using signed communication on the spoken language development of deaf children with a cochlear implant (CI) is much debated. We report on two studies that investigated relationships between spoken word and sign processing in children with a CI who are exposed to signs in addition to spoken language. Study 1 assessed rapid word and sign learning in 13 children with a CI and found that performance in both language modalities correlated positively. Study 2 tested the effects of using sign-supported speech on spoken word processing in eight children with a CI, showing that simultaneously perceiving signs and spoken words did not negatively affect their spoken word recognition or learning. Together, these two studies suggest that sign exposure does not necessarily have a negative effect on speech processing in children with a CI.
Despite their different auditory input, children with a CI appear to be able to use many acoustic cues effectively in speech perception. Most importantly, children with a CI and normal-hearing children were observed to use similar cue-weighting patterns.