Most research on how children learn the mapping between words and the world has assumed that language is arbitrary and has investigated language learning in contexts in which the objects referred to are present in the environment. Here, we report analyses of a semi-naturalistic corpus of caregivers talking to their 2- to 3-year-olds. We focus on caregivers’ use of non-arbitrary cues across different expressive channels: both iconic (onomatopoeia and representational gestures) and indexical (points and actions with objects). We ask whether these cues are used differently when talking about objects known or unknown to the child, and when the objects referred to are present or absent. We hypothesized that caregivers would use these cues more often with objects novel to the child, and that they would use the iconic cues especially when objects are absent, because iconic cues bring properties of referents to the mind’s eye. We find that cue distribution differs: all cues except points are more common for unknown objects, indicating their potential role in learning; onomatopoeia and representational gestures are more common in displaced contexts, whereas indexical cues are more common when objects are present. Thus, caregivers provide multimodal non-arbitrary cues to support children’s vocabulary learning, and iconicity specifically can support linking mental representations of objects and labels.
In the last decade, a growing body of work has convincingly demonstrated that languages embed a certain degree of non-arbitrariness (mostly in the form of iconicity, namely the presence of imagistic links between linguistic form and meaning). Most of this previous work has been limited to assessing the degree (and role) of non-arbitrariness in speech (for spoken languages) or in the manual components of signs (for sign languages). When approached in this way, non-arbitrariness is acknowledged but still considered to have little presence and purpose, showing a diachronic movement towards more arbitrary forms. However, this perspective is limited, as it does not take into account the situated nature of language use in face-to-face interactions, where language comprises the categorical components of speech and signs, but also multimodal cues such as prosody, gestures, and eye gaze. We review work concerning the role of context-dependent iconic and indexical cues in language acquisition and processing to demonstrate the pervasiveness of non-arbitrary multimodal cues in language use, and we discuss their function. We then argue that the online omnipresence of multimodal non-arbitrary cues supports children and adults in dynamically developing situational models.
How children learn words continues to be an unanswered question in the study of human development. A large body of work has provided insight into a number of key strategies that can support word learning, including children's sensitivity to temporal co-occurrence between labels and objects (
Most language use is displaced, referring to past, future or hypothetical events. Displacement poses an important challenge for language learning. How can children learn what words refer to when the referent is not physically available? We suggest that caregivers provide children with iconic vocal and gestural cues that imagistically evoke properties of absent referents to support displaced learning. We collected an audio-visual corpus of English-speaking caregiver-child interactions (N = 71, 24-58 months, 37 female) and annotated the range of vocal and manual behaviours caregivers produced. We found that caregivers used iconic cues especially in displaced contexts, using other cues when objects were present. Thus, we map caregivers’ non-linguistic behaviours, showing that they provide iconic cues to support displaced language learning and processing.