Infants growing up in bilingual homes learn two languages simultaneously without apparent confusion or delay. However, the mechanisms that support this remarkable achievement remain unclear. Here, we demonstrate that infants use language-control mechanisms to preferentially activate the currently heard language during listening. In a naturalistic eye-tracking procedure, bilingual infants were more accurate at recognizing objects labeled in same-language sentences ("Find the dog!") than in switched-language sentences ("Find the chien!"). Measurements of infants' pupil size over time indicated that this resulted from increased cognitive load during language switches. However, language switches did not always engender processing difficulties: the switch cost was reduced or eliminated when the switch was from the nondominant to the dominant language, and when it crossed a sentence boundary. Adults showed the same patterns of performance as infants, even though target words were simple and highly familiar. Our results provide striking evidence from infancy to adulthood that bilinguals monitor their languages for efficient comprehension. Everyday practice controlling two languages during listening is likely to explain previously observed bilingual cognitive advantages across the lifespan.
A talking face provides redundant cues on the mouth that might support language learning and highly salient social cues in the eyes. What drives children's looking towards the mouth versus eyes of a talking face? This study reports data from 292 children who viewed faces speaking English, French, and Russian. We investigated the impact of children's age (5 months to 5 years) and language background (monolingual English, monolingual French, bilingual English-French), and the speaker's language (dominant, non-dominant, or non-native) relative to children's native language(s). Data from 129 bilingual adults were also collected for comparison. Five-month-olds showed balanced attention to the eyes and mouth, but children up to 5 years tended to be most interested in the mouth. In contrast, adults were most interested in the eyes. We found little evidence for different patterns of attention for monolinguals versus bilinguals, or to a native versus a non-native speaker. Using percentile scores, monolinguals with larger productive vocabularies looked more at the mouth, while bilinguals with larger comprehension vocabularies looked marginally less at the mouth, although both effects were small and not as robust with raw vocabulary scores. Children showed large but stable individual variability in their face scanning patterns across different speakers. Our results show that the way that children allocate their attention to talking faces continues to change from infancy through the preschool years and beyond. Future studies will need to go beyond looking at bilingualism, speaker language, and vocabulary size to understand what drives children's in-the-moment attention to talking faces.
This document is the Accepted Manuscript version of the following article: Tania S. Zamuner, Elizabeth Morin-Lessard, Stephanie Strahm, and Michael P. A. Page, "Spoken word recognition of novel words, either produced or only heard during learning", Journal of Memory and Language, Vol. 89, August 2016, pp. 55-67, doi: 10.1016/j.jml.2015.10.003.
Psycholinguistic models of spoken word production differ in how they conceptualize the relationship between lexical, phonological, and output representations, making different predictions for the role of production in language acquisition and language processing. This work examines the impact of production on spoken word recognition of newly learned non-words. In Experiment 1, adults were trained on non-words with visual referents; during training, they produced half of the non-words, with the other half being heard only. Using a visual world paradigm at test, eye-tracking results indicated faster recognition of non-words that were produced compared with heard-only during training. In Experiment 2, non-words were correctly pronounced or mispronounced at test. Participants showed a different pattern of recognition for mispronunciations of non-words that were produced compared with heard-only during training. Together, these results indicate that production affects the representations of newly learned words.
This research investigates the effect of production on 4.5- to 6-year-old children's recognition of newly learned words. In Experiment 1, children were taught four novel words in produced or heard training conditions during a brief training phase. In Experiment 2, children were taught eight novel words, and this time the training conditions were presented in a blocked design. Immediately after training, children were tested on their recognition of the trained novel words using a preferential looking paradigm. In both experiments, children recognized novel words that were produced and heard during training, but demonstrated better recognition for items that were heard. These findings run counter to previous results reported in the literature with adults and children. Our results show that the benefits of speech production for word learning depend on factors such as task complexity and the developmental stage of the learner.
In bilingual language environments, infants and toddlers listen to two separate languages during the same key years that monolingual children listen to just one, and bilinguals rarely learn each of their two languages at the same rate. Learning to understand language requires them to cope with challenges not found in monolingual input, notably the use of two languages within the same utterance (e.g., "Do you like the perro?" or "¿Te gusta el doggy?"). For bilinguals of all ages, switching between two languages can reduce the efficiency of real-time language processing. But language switching is a dynamic phenomenon in bilingual environments, presenting the young learner with many junctures where comprehension can be derailed or even supported. In this study, we tested 20 Spanish-English bilingual toddlers (18 to 30 months) who varied substantially in language dominance. Toddlers' eye movements were monitored as they looked at familiar objects and listened to single-language and mixed-language sentences in both of their languages. We found asymmetrical switch costs when toddlers were tested in their dominant versus non-dominant language, and critically, they benefited from hearing nouns produced in their dominant language, independent of switching. While bilingualism does present unique challenges, our results suggest a unified picture of early monolingual and bilingual learning. Just as in monolinguals, experience shapes bilingual toddlers' word knowledge, and with more robust representations, toddlers are better able to recognize words in diverse sentences.