The movements that we make with our body vary continuously along multiple dimensions. However, many of the tools and techniques presently used for coding and analyzing hand gestures and other body movements yield categorical outcome variables. Focusing on categorical variables as the primary quantitative outcomes may mislead researchers or distort conclusions. Moreover, categorical systems may fail to capture the richness present in movement. Variations in body movement may be informative along multiple dimensions. For example, a single hand gesture has a unique size, height of production, trajectory, speed, and handshape. Slight variations in any of these features may alter how both the speaker and the listener are affected by gesture. In this paper, we describe a new method for measuring and visualizing the physical trajectory of movement using video. The method is broadly accessible, requiring only video data and freely available computer software, and it allows researchers to examine features of hand gestures, body movement, and other motion, including size, height, curvature, and speed. We offer a detailed account of how to implement this approach, and we also offer guidelines for situations where it may be fruitful in revealing how the body expresses information. Finally, we provide data from a small study on how speakers alter their hand gestures in response to different characteristics of a stimulus to demonstrate the utility of analyzing continuous dimensions of motion. By creating shared methods, we hope to facilitate communication between researchers from varying methodological traditions.
Communication is shaped both by what we are trying to say and by whom we are saying it to. We examined whether and how shared information influences the gestures speakers produce along with their speech. Unlike prior work examining effects of common ground on speech and gesture, we examined a situation in which some speakers have the same amount of mutually shared experience with their listener but the relevance of the information from shared experience is different for listeners in different conditions. Additionally, speakers and listeners in all conditions shared a visual perspective. Speakers and listeners solved a version of the Tower of Hanoi task together. Speakers then solved a second version of the task without the listener present with the manner of disk movement manipulated; the manner was either the same as or different from the version that had been solved with the listener present. Thus, speakers' knowledge of the relevance of shared knowledge was manipulated. We measured the content of speech along with the physical form and content of the accompanying hand gesture. Although speakers did not modulate their spoken language, speakers who knew their listeners had not previously experienced the appropriate manner of completion gestured higher in space, highlighting manner information, but without altering the physical gesture trajectory. Thus, gesture can be sensitive to the knowledge of listeners even when speech is not. Speakers' gestures can play an independent role in reflecting common ground between speakers and listeners, perhaps by simultaneously incorporating both speaker and listener perspectives.
Theories of lexical production differ in whether they allow phonological processes to affect lexical selection directly. Whereas some accounts, such as interactive activation accounts, predict (weak) early effects of phonological processes during lexical selection via feedback connections, strictly serial architectures do not make this prediction. We present evidence from lexical selection during unscripted sentence production that lexical selection is affected by the phonological form of recently produced words. In a video description experiment, participants described scenes that were compatible with several near-meaning-equivalent verbs. We found that speakers were less likely than expected by chance to select a verb form that would result in phonological onset overlap with the subject of the sentence. Additional evidence from the distribution of disfluencies immediately preceding the verb argues that this effect is due to early effects on lexical selection, rather than later corrective processes, such as self-monitoring. Taken together, these findings support accounts that allow early feedback from phonological processes to word-level nodes, even during lexical selection.
We investigate phonological encoding during unscripted sentence production, focusing on the effect of phonological overlap on phonological encoding. Previous work on this question has almost exclusively employed isolated word production or highly scripted multi-word production. These studies have led to conflicting results: some studies found that phonological overlap between two words facilitates phonological encoding, while others found inhibitory effects. One worry with many of these paradigms is that they involve processes that are not typical of everyday language use, which calls into question the extent to which their findings speak to the architectures and mechanisms underlying language production. We present a paradigm for investigating the consequences of phonological overlap between words in a sentence while leaving speakers much of the lexical and structural choice typical of everyday language use. Adult native speakers of English described events in short video clips. We annotated the presence of disfluencies and the speech rate at various points throughout the sentence, as well as the constituent order. We find that phonological overlap has an inhibitory effect on phonological encoding. Specifically, if adjacent content words share their phonological onset (e.g., hand the hammer), they are preceded by production difficulty, as reflected in fluency and speech rate. We also find that this production difficulty affects speakers' constituent order preferences during grammatical encoding. We discuss our results alongside previous work to isolate the properties of other paradigms that produced facilitatory or inhibitory results. The data from our paradigm also speak to questions about the scope of phonological planning in unscripted speech and whether phonological and grammatical encoding interact.