In dialogue, repeated references contain fewer words (which are also acoustically reduced) and fewer gestures than initial ones. In this paper, we describe three experiments studying to what extent gesture reduction is comparable to other forms of linguistic reduction. Since previous studies showed conflicting findings for gesture rate, we systematically compare two measures of gesture rate: gesture rate per word and per semantic attribute (Experiment I). In addition, we ask whether repetition impacts the form of gestures, by manually annotating a number of features (Experiment I), by studying gradient differences using a judgment test (Experiment II), and by investigating how effective initial and repeated gestures are at communicating information (Experiment III). The results revealed no reduction in terms of gesture rate per word, but a U-shaped reduction pattern for gesture rate per attribute. Gesture annotation showed no reliable effects of repetition on gesture form, yet participants judged gestures from repeated references as less precise than those from initial ones. Despite this gradient reduction, gestures from initial and repeated references were equally successful in communicating information. Besides effects of repetition, we found systematic effects of visibility on gesture production, with more, longer, larger, and more communicative gestures when participants could see each other. We discuss the implications of our findings for gesture research and for models of speech and gesture production.
Do people speak differently when they cannot use their hands? Previous studies have suggested that speech becomes less fluent and more monotonous when speakers cannot gesture, but the evidence for this claim remains inconclusive. The present study attempts to find support for this claim in a production experiment in which speakers had to give addressees instructions on how to tie a tie; half of the participants had to perform this task while sitting on their hands. Other factors that influence the ease of communication, such as mutual visibility and previous experience, were also taken into account. No evidence was found for the claim that the inability to gesture affects speech fluency or monotony. An additional perception task showed that listeners were also unable to hear whether or not someone was gesturing.
The linguistic metaphors of time appear to influence how people gesture about time. This study finds that Chinese–English bilinguals produce more vertical gestures when talking about Chinese time references with vertical spatial metaphors than (1) when talking about the time conceptions in the English translations, and (2) when talking about Chinese time references with no spatial metaphors. Additionally, Chinese–English bilinguals prefer vertical gestures to lateral gestures when perceiving Chinese time references with vertical spatial metaphors and the corresponding English translations, whereas there is no such preference when perceiving time references without spatial metaphors. Furthermore, this vertical tendency is not due to vertical gestures being generally less ambiguous than lateral gestures for addressees. In conclusion, the vertical gesturing about time by Chinese–English bilinguals is shaped both by stable language-specific conceptualisations and by online changes in linguistic choices.
Speech perception is multimodal: not only speech but also gesture presumably plays a role in how a message is perceived. However, there have not been many studies on the effect that hand gestures may have on speech perception in general, and on persuasive speech in particular. Moreover, we do not yet know whether the effect of gestures is larger when addressees are not involved in the topic of the discourse and are therefore more focused on peripheral cues rather than on the content of the message. In the current study, participants were shown a speech with or without gestures; some participants were involved in the topic of the speech, others were not. We studied five measures of persuasiveness. Results showed that for all but one measure, viewing the video with accompanying gestures made the speech more persuasive. In addition, there were several interactions, showing that the performance of the speaker and the factual accuracy of the speech were rated especially high by those participants who saw gestures but were not involved in the topic of the speech.
On what happens in gesture when communication is unsuccessful
Hoetjes, Marieke; Krahmer, Emiel; Swerts, Marc
Previous studies found that repeated references in successful communication are often reduced, not only at the acoustic level, but also in terms of words and manual co-speech gestures. In the present study, we investigated whether repeated references are still reduced when reduction would not benefit the communicative situation, namely after the speaker receives negative feedback from the addressee. In a director-matcher task (Experiment I), we studied gesture rate, as well as the general form of the gestures produced in initial and repeated references. In a separate experiment (Experiment II), we studied whether there might (also) be more gradual differences in gesture form between gestures in initial and repeated references, by asking human judges which of two gestures (one from an initial reference and one from a repeated reference following negative feedback) they considered more precise. In both experiments, mutual visibility was included as a between-subjects factor. Results showed that after negative feedback, gesture rate increased in a marginally significant way. With regard to gesture form, we found little evidence for changes after negative feedback, except for a marginally significant increase in the number of repeated strokes within a gesture. Lack of mutual visibility had a significant reducing effect only on gesture size, and did not interact with repetition in any way. However, we did find gradual differences in gesture form, with gestures produced after negative feedback being judged as marginally more precise than initial gestures. These results suggest that the production of unsuccessful repeated references involves a process different from the reduction found in previous studies of repeated references, with speakers appearing to put more effort into their gestures after negative feedback, as suggested by the trends towards an increased gesture rate and towards gestures being judged as more precise after feedback.