Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science (HSK) 38/1, 2013
DOI: 10.1515/9783110261318.837
52. Experimental methods in co-speech gesture research

Cited by 2 publications (2 citation statements)
References 89 publications (124 reference statements)
“…However, although several projects have developed multimodal data annotated with gestural information (for a discussion, see Wagner et al., 2014), few of these efforts have yet resulted in freely available annotated corpora for multiple languages, such as Koutsombogera and Vogel (2018). In parallel with this work, the linguistics community interested in co-speech gesture has created and annotated many multimodal data collections, but typically for experimental purposes and thus in very specific contexts (Holler, 2013).…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
“…To illustrate, when asking "Can you pass me the coffee pot?," a speaker might gesture in a way that reflects the size and shape of the coffee pot, or the action of pouring coffee. Although there is evidence that iconic gestures facilitate language comprehension (see Dargue et al., 2019; Holler, 2013; Hostetter, 2011; Özyürek, 2009, for meta-analyses and reviews), little is known about the processing of iconic gestures in contexts where the entities being referred to are, along with the speaker, in the listener's visual field. The aim of the present study is to explore how listeners allocate attention to scene elements in this context, whether gesture cues can facilitate comprehension, and whether these patterns change according to the ease or difficulty of understanding the auditory signal.…”
Citation type: mentioning
Confidence: 99%