In two experiments with Telugu–English bilinguals, we examined whether bilingual speakers are sensitive to an interlocutor's (here, a cartoon's) relative language proficiency when voluntarily selecting a language for object naming. After familiarization with four cartoons of varied L2 proficiency, participants performed a voluntary naming task. In Experiment 1, participants explicitly indicated their choice of language before naming objects. In Experiment 2, participants named the objects directly. In both experiments, language choices and switch rates were robustly modulated by the cartoon's perceived language proficiency. However, awareness of the cartoons' perceived proficiency did not modulate naming latency. These results provide strong support for the adaptive control hypothesis, showing that bilingual speakers are sensitive to their interlocutor's language needs and that this sensitivity influences how they plan their language use. The findings show that speakers take the language proficiency of their interlocutors into account, suggesting a high degree of adaptability in the bilingual mind.
Two experiments using the visual-world paradigm examined whether culture-specific images influence the activation of translation equivalents (TEs) during spoken-word recognition in bilinguals. In Experiment 1, participants performed a visual-world task in which they were asked to click on the target after hearing a spoken word (L1 or L2). In Experiment 2, participants were presented with culture-specific images (faces representing L1, L2, and Neutral) during the visual-world task. Time-course analysis of Experiment 1 revealed significantly more looks to the TE-cohort member than to distractors only when participants heard L2 words. In Experiment 2, when the culture-specific images were congruent with the spoken word's language, participants directed more looks to the TE-cohort member than to distractors. This effect was observed in both language directions, but not when the culture-specific images were incongruent with the spoken word. The eye-tracking data suggest that culture-specific images influence cross-linguistic activation of semantics during bilingual audio-visual language processing.