Bilinguals perceptually accommodate speech variation across languages, but the extent to which this flexibility depends on bilingual experience is uncertain. One account holds that bilingual experience promotes language-specific processing modes, implying that bilinguals can switch as appropriate between the different phonetic systems of the languages they speak. Another holds that bilinguals rapidly recalibrate to the unique acoustic properties of each language via language-general processes common to monolinguals. Challenging this latter account, the present results show that Spanish-English bilinguals exposed to both languages from early childhood, but not English monolinguals, shift perception as appropriate across acoustically controlled English and Spanish contexts. Early bilingual experience appears to promote language-specific phonetic systems.
Bilinguals understand when the communicative context calls for speaking a particular language and can switch from one language to the other based on such conceptual knowledge. There is disagreement about whether conceptually based language switching is also possible in the listening modality. For example, can bilingual listeners perceptually adjust to changes in pronunciation across languages based on their conceptual understanding of which language they are currently hearing? We asked French-English and Spanish-English bilinguals to identify nonsense monosyllables as beginning with /b/ or /p/, speech categories that French and Spanish speakers pronounce differently than English speakers do. We conceptually cued each bilingual group to one or the other of their two languages by explicitly instructing them that the speech items were word onsets in that language, uttered by a native speaker of that language. Both groups adjusted their /b-p/ identification boundary in accordance with this conceptual cue to the language context. These results support a bilingual model permitting conceptually based language selection at both the speaking and listening ends of a communicative exchange.
Infants might be better at teasing apart dialects with different language rules when they hear the dialects at different times, since language learners do not always combine input heard at different times. However, no previous research has independently varied the temporal distribution of conflicting language input. Twelve-month-olds heard two artificial language streams representing different dialects: a “pure stream” whose sentences adhered to abstract grammar rules such as aX bY, and a “mixed stream” in which any a- or b-word could precede any X- or Y-word. Infants were then tested for generalization of the pure stream’s rules to novel sentences. Supporting our hypothesis, infants showed generalization when the two streams’ sentences alternated in minutes-long intervals without any perceptually salient change across streams (Experiment 2), but not when all sentences from these same streams were randomly interleaved (Experiment 3). Results are interpreted in light of temporal context effects in word learning.