Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and is therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how the vocal and visual modalities (i.e. gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment in which pairs of participants performed a referential communication task describing unfamiliar stimuli, thereby reducing reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gesture only, using non-linguistic vocalization only, and with the option to use both (multimodal communication). The results suggest that even in the absence of conventional signals, gesture is a more powerful mode of communication than vocalization, but that there are also advantages to multimodality over gesture alone. Participants with the option to produce multimodal signals achieved accuracy comparable to those using only gesture, but gained an efficiency advantage. Analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
While studies of language evolution have themselves evolved to include interaction as a feature of interest (Healey et al., 2007; Tamariz et al., 2017; Fay et al., 2017; Byun et al., in press), many still fail to consider just what interaction offers emerging communication systems. That is, while it has been acknowledged that face-to-face interaction in communication games is beneficial in its approximation of natural language use (Macuch Silva & Roberts, 2016; Nölle et al., 2017), there remains a lack of detailed analysis of what this type of interaction affords participants, and how those affordances impact the evolving language. To this end, here we examine one particular process that occurs in interaction: repair, the processes by which interlocutors indicate misunderstanding and resolve problems in communication (Schegloff, Jefferson, & Sacks, 1977; Jefferson, 1972). Though it is often not explicitly analyzed, repair is a relevant aspect of interaction to consider both for its effects on the evolution of a communication system and for how it demonstrates the moment-to-moment processing and negotiation of alignment in emerging communication. We present data from various studies of language evolution in which we document how repair is carried out, the types of repair present, and their effect on novel signaling. All studies in this collection utilized referential communication tasks, some iterated over simulated generations and others involving repeated interactions between two individuals. However, they differ in the modality of stimuli and communication. The data collection includes silent gesture communication of written nouns and verbs; non-linguistic vocalizations and
Previous research in cognitive science and psycholinguistics has shown that language users are able to predict upcoming linguistic input probabilistically, pre-activating material on the basis of cues emerging from different levels of linguistic abstraction, from phonology to semantics. Current evidence suggests that linguistic prediction also operates at the level of pragmatics, where processing is strongly constrained by context. To test a specific theory of contextually constrained processing, termed pragmatic surprisal theory here, we used a self-paced reading task in which participants were asked to view visual scenes and then read descriptions of those same scenes. Crucially, we manipulated whether the visual context biased readers toward specific pragmatic expectations about how the description might unfold word by word. Contrary to the predictions of pragmatic surprisal theory, we found that participants took longer to read the main critical term in scenarios where they were biased by context and pragmatic constraints to expect a given word, as opposed to scenarios where there was no pragmatic expectation for any particular referent.