This paper examines gestures that simultaneously express multiple physical perspectives, known as dual viewpoint gestures. These gestures were first discussed in McNeill's 1992 book, Hand and Mind. We examine a corpus of approximately fifteen hours of narrative data and use these data to extend McNeill's observations about the ways viewpoints can be combined. We also show that a phenomenon previously thought to occur only in the narrations of children is present in the narrations of adults. We discuss the significance of these gestures for theories of speech-gesture integration.
Dual viewpoint gestures

This paper examines hand and body gestures that simultaneously express multiple perspectives on an event or scene. These gestures, known as dual viewpoint gestures, suggest that a speaker is taking multiple spatial perspectives on a scene at the same time. Although this is an impressive cognitive feat, relatively little has been written about the phenomenon (though see McClave, 2000). To provide systematic data on dual viewpoint gestures, we examine a corpus of approximately fifteen hours of narrative data containing over four thousand gestures. We use these data to extend previous descriptions of the ways in which viewpoints can be combined in gesture. We also show that a phenomenon thought to be present only in the narrations of children is in fact present in the narrations of adults. The paper is organized as follows. We first explain what it means for a gesture to express viewpoint, commenting on different uses of the terms viewpoint and perspective. We then describe dual viewpoint gestures and provide examples from our corpus. We end with open questions that may serve as starting points for future research.
Gestural viewpoint research suggests that several dimensions determine which perspective a narrator takes, including properties of the event described. Events can evoke gestures from the point of view of a character (CVPT), an observer (OVPT), or both perspectives. CVPT and OVPT gestures have been compared to constructed action (CA) and classifiers (CL) in signed languages. We ask how CA and CL, as produced by ASL signers, compare to previous results for CVPT and OVPT from English-speaking co-speech gesturers. Ten ASL signers described cartoon stimuli from Parrill (2010). Events that Parrill had shown to elicit a particular gestural strategy (CVPT, OVPT, or both) were coded for signers' use of CA and CL. CA was divided into three categories: CA-torso, CA-affect, and CA-handling. Signers used CA-handling most often for events where gesturers used CVPT exclusively. They used CL most often where gesturers used OVPT exclusively, and least often where gesturers used CVPT exclusively.