When people are engaged in social interaction, they often repeat each other’s communicative behavior, such as words or gestures. This kind of alignment has been studied across a wide range of disciplines and has been accounted for by divergent theories. In this paper, we review various operationalizations of lexical and gestural alignment. We reveal that scholars have fundamentally different takes on when and how behavior is considered to be aligned, which makes it difficult to compare findings and draw uniform conclusions. Furthermore, we show that scholars tend to focus on one particular dimension of alignment (traditionally, whether two instances of behavior overlap in form), yet underspecify, conflate or neglect other dimensions. This stands in the way of proper theory testing and building, which requires a well-defined account of the factors that are central to or might enhance alignment. To capture the complex nature of alignment, we identify five key dimensions that formalize the relationship between any pair of behaviors: sequence, time, semantics, form and modality. We show how assumptions regarding the underlying mechanism of alignment (categorized into priming versus grounding) pattern together with the operationalization in terms of the five dimensions. This conceptual framework can help researchers in the field of alignment and related phenomena (including behavior matching, mimicry, entrainment and accommodation) to formulate their hypotheses and operationalizations in a more transparent and systematic manner. The framework also enables us to discover unexplored research avenues and derive new hypotheses from existing theories.
Most manual communicative gestures that humans produce cannot be looked up in a dictionary, as these gestures derive their meaning in large part from the communicative context and are not conventionalized. However, the extent to which the communicative signal itself — bodily postures in movement, or kinematics — can inform us about gesture semantics remains understudied. Can we construct, in principle, a distribution-based semantics of gesture kinematics, similar to how word vectorization methods in natural language processing (NLP) are now widely used to study semantic properties in text and speech? For such a project to get off the ground, we need to know the extent to which semantically similar gestures are more likely to be kinematically similar. In Study 1, we assess whether word2vec-based semantic distances between the concepts participants were explicitly instructed to convey in silent gestures relate to the kinematic distances of these gestures, as obtained from Dynamic Time Warping (DTW). In a second, director-matcher dyadic study, we assess kinematic similarity between spontaneous co-speech gestures produced by interacting participants. Before and after the interaction, participants were asked how they would name the objects. The semantic distances between the resulting names were related to the kinematic distances between gestures made in the context of conveying those objects in the interaction. We find that the gestures’ semantic relatedness is reliably predictive of their kinematic relatedness across these highly divergent studies, which suggests that an NLP-style method for deriving semantic relatedness from kinematics is a promising avenue for future developments in automated multimodal recognition. Deeper implications for statistical learning processes in multimodal language are discussed.
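As a rough illustration of the analysis logic described in this abstract, the sketch below pairs word2vec distances between concept labels with DTW distances between gesture trajectories and correlates the two. The concept labels, the stand-in trajectories, the pretrained embedding model and the choice of Spearman correlation are assumptions made for illustration, not the study's actual pipeline.

```python
# Minimal sketch (not the authors' code): relate word2vec semantic distances
# between concept labels to DTW distances between gesture kinematics.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr
import gensim.downloader as api

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two (time x dims) trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

wv = api.load("word2vec-google-news-300")           # pretrained embeddings (assumed choice)
concepts = ["hammer", "saw", "bird", "airplane"]     # hypothetical concept labels
rng = np.random.default_rng(0)
gestures = {c: rng.random((120, 6)) for c in concepts}  # stand-in motion-tracking data

sem, kin = [], []
for c1, c2 in combinations(concepts, 2):
    sem.append(wv.distance(c1, c2))                  # 1 - cosine similarity
    kin.append(dtw_distance(gestures[c1], gestures[c2]))

rho, p = spearmanr(sem, kin)
print(f"semantic vs. kinematic distance: rho={rho:.2f}, p={p:.3f}")
```

In a real analysis, the stand-in arrays would be replaced with tracked joint positions per gesture, and the correlation would be computed over all concept pairs rather than this toy set.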
In human communication, social intentions and meaning are often revealed in the way we move. In this study, we investigate the flexibility of human communication in terms of kinematic modulation in a clinical population, namely, autistic individuals. The aim of this study was twofold: to assess 1) whether communicatively relevant kinematic features of gestures differ between autistic and neurotypical individuals, and 2) whether autistic individuals use communicative kinematic modulation to support gesture recognition. We tested autistic and neurotypical individuals on a silent gesture production task and a gesture comprehension task. We measured movement during the gesture production task using a Kinect motion tracking device in order to determine whether autistic individuals differed from neurotypical individuals in their gesture kinematics. For the gesture comprehension task, we used stick-light figures as stimuli and, by testing for a correlation between the kinematics of these videos and recognition performance, we assessed whether autistic individuals used communicatively relevant kinematic cues to support recognition. We found that 1) silent gestures produced by autistic and neurotypical individuals differ in communicatively relevant kinematic features, such as the number of meaningful holds between movements, and 2) while autistic individuals are overall unimpaired at recognizing gestures, they process repetition and complexity, measured as the number of perceived submovements, differently than neurotypical individuals do. These findings highlight how subtle aspects of neurotypical behavior can be experienced differently by autistic individuals, and demonstrate the relationship between movement kinematics and social interaction in high-functioning autistic individuals.
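To make the kinematic features mentioned above concrete, the following sketch shows one possible way to derive a hold count and a submovement count from a tracked hand trajectory. The definitions (submovements as peaks in the speed profile, holds as sustained stretches of near-zero speed), thresholds and stand-in data are assumptions for illustration, not the study's exact operationalization.

```python
# Illustrative sketch (assumed definitions, not the study's pipeline): derive
# hold count and submovement count from a smoothed speed profile.
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def speed_profile(positions, fps=30):
    """Frame-to-frame speed of a (time x 3) joint trajectory, lightly smoothed."""
    vel = np.diff(positions, axis=0) * fps
    speed = np.linalg.norm(vel, axis=1)
    return savgol_filter(speed, window_length=9, polyorder=2)

def count_submovements(speed, min_peak=0.1):
    """Submovements counted as distinct peaks in the speed profile (assumed definition)."""
    peaks, _ = find_peaks(speed, height=min_peak)
    return len(peaks)

def count_holds(speed, thresh=0.05, min_frames=8):
    """Holds counted as sustained stretches of near-zero speed (assumed definition)."""
    holds, run = 0, 0
    for still in speed < thresh:
        run = run + 1 if still else 0
        if run == min_frames:   # count each stretch once, when it reaches the minimum length
            holds += 1
    return holds

rng = np.random.default_rng(0)
hand = np.cumsum(rng.normal(0, 0.01, (300, 3)), axis=0)  # stand-in Kinect hand track
sp = speed_profile(hand)
print(count_submovements(sp), count_holds(sp))
```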
Reverse engineering how language emerged is a daunting interdisciplinary project. Experimental cognitive science has contributed to this effort by eliciting in the lab constraints likely to play a role in language emergence, such as the iterated transmission of communicative tokens between agents. Since such constraints played out over long phylogenetic time and involved vast populations, a crucial challenge for iterated language learning paradigms is to extend their limits. In the current approach, we perform a multiscale quantification of kinematic changes in an evolving silent gesture system. Silent gestures consist of complex, multi-articulatory movements that have so far proven elusive to quantify in a structured and reproducible way, and they are primarily studied through human coders meticulously interpreting the referential content of gestures. Here we reanalyzed video data from a silent gesture iterated learning experiment (Motamedi et al. 2019), which originally showed increases in the systematicity of gestural form over language transmissions. We applied a signal-based approach, first using computer vision techniques to quantify kinematics from the video data. We then performed a multiscale kinematic analysis showing that, over generations of language users, silent gestures became more efficient and less complex in their kinematics. We further detect systematicity in the interrelations of the communicative tokens, which proved to be a proxy for the systematicity obtained via human observation data. Thus, we demonstrate the potential of a signal-based approach to language evolution in complex multi-articulatory gestures.
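A minimal sketch of the signal-based idea in this abstract follows: summarize each generation's gestures with simple kinematic measures and test their trend over generations. The keypoints are assumed to come from a pose tracker applied to the videos; the specific measures (path length as an inverse-efficiency proxy, spectral entropy of the speed signal as a complexity proxy) and the stand-in data are illustrative choices, not the authors' published measures.

```python
# Hedged sketch: per-gesture kinematic summaries correlated with generation number.
import numpy as np
from scipy.stats import spearmanr

def path_length(keypoints):
    """Total distance travelled by a (time x 2) wrist keypoint track."""
    return np.linalg.norm(np.diff(keypoints, axis=0), axis=1).sum()

def spectral_entropy(keypoints, fps=25):
    """Entropy of the speed signal's power spectrum; a rough complexity proxy."""
    speed = np.linalg.norm(np.diff(keypoints, axis=0) * fps, axis=1)
    power = np.abs(np.fft.rfft(speed - speed.mean())) ** 2
    p = power / power.sum()
    return -(p * np.log(p + 1e-12)).sum()

# Stand-in data: {generation: [wrist tracks]} in place of real video-derived keypoints.
rng = np.random.default_rng(0)
data = {g: [np.cumsum(rng.normal(0, 1.0 / (g + 1), (200, 2)), axis=0) for _ in range(10)]
        for g in range(5)}

gens, eff, comp = [], [], []
for g, tracks in data.items():
    for kp in tracks:
        gens.append(g)
        eff.append(path_length(kp))
        comp.append(spectral_entropy(kp))

print("path length vs. generation:", spearmanr(gens, eff))
print("spectral entropy vs. generation:", spearmanr(gens, comp))
```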