2017
DOI: 10.1044/2016_jslhr-h-16-0101

Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension

Abstract: Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method: Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band n…
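For context, noise-vocoding degrades speech by replacing its spectral fine structure with band-limited noise while preserving each frequency band's amplitude envelope; the fewer the bands, the less intelligible the result (hence conditions such as 2-band vocoding). Below is a minimal sketch of the technique in Python, assuming NumPy/SciPy; the band edges, filter order, and normalization are illustrative choices, not the parameters used in the study.

```python
# Minimal noise-vocoder sketch (illustrative parameters, not the study's).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=2, f_lo=50.0, f_hi=8000.0):
    """Replace speech fine structure with band-limited noise, keeping only
    each band's amplitude envelope. Fewer bands = more degraded speech."""
    # Logarithmically spaced band edges between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)         # band-limited speech
        env = np.abs(hilbert(band))             # amplitude envelope
        carrier = sosfiltfilt(sos, noise)       # band-limited noise
        out += env * carrier                    # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)  # normalize to [-1, 1]

# Example (fs must exceed 2 * f_hi): vocoded = noise_vocode(x, fs=44100, n_bands=2)
```

With `n_bands=2` the output retains little beyond the slow amplitude fluctuations of speech, which is what makes visual cues such as gestures and visible speech informative for comprehension.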



Cited by 109 publications (210 citation statements). References 41 publications.
“…Participants were presented with 160 short video clips of an actress who uttered a Dutch action verb, which was accompanied by an iconic gesture or no gesture. These video clips were originally used in a previous behavioral experiment in Drijvers and Özyürek (2017), where pretests and further details of the stimuli can be found.…”
Section: Methods (mentioning; confidence: 99%)
“…As such, these gestures resembled those in natural speech production: they were meant to be understood in the context of speech, not as pantomimes, which can be fully understood without speech. We investigated the recognizability of all our iconic gestures outside the context of speech by presenting participants with all video clips without any audio and asking them to name a verb that depicted the video (as part of Drijvers & Özyürek, 2017). We coded answers as “correct” when a correct answer or a synonym was given in relation to the verb with which the actor produced each iconic gesture, and as “incorrect” when the verb was unrelated.…”
Section: Methods (mentioning; confidence: 99%)
“…Given that the intonational contrasts were inherently more difficult to judge auditorily than the length contrasts (as the error rates attest), perhaps participants were more likely to turn to gesture when they had difficulty hearing the intonational information. This makes sense given research on native-language processing showing that people rely more on visual information when they struggle to process auditory information (Sumby & Pollack, 1954; and specifically for hand gesture, Drijvers & Özyürek, 2016; Obermeier, Dolk, & Gunter, 2012). Although the present study cannot definitively determine whether the second or third account is correct, the results still make a novel contribution to the literature: in contrast to recent findings showing that metaphoric length gestures do not help with learning vowel length distinctions (Hirata & Kelly, 2010; Hirata et al., 2014), we have shown that metaphoric intonation (pitch) gestures do help non-native speakers process phonemic intonational information in FL speech (see also Hannah et al., 2016).…”
Section: Intonational Contrasts (mentioning; confidence: 99%)
“…The overall mean nameability index of the final stimulus set of 32 gestures was 49%. This indicates that, as a whole, the stimulus set is best characterized as containing iconic gestures, which are characterized by a certain ambiguity when presented in the absence of speech (Hadar and Pinchas-Zamir, 2004; Drijvers and Özyürek, 2017).…”
(mentioning; confidence: 99%)
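The nameability index quoted above is simple arithmetic: per gesture, the proportion of participants whose responses were coded “correct,” averaged across gestures. Here is a minimal sketch with made-up data, assuming only the correct/incorrect coding described in the quote; the gesture labels and responses are hypothetical.

```python
# Hypothetical coded responses: (gesture_id, "correct" | "incorrect").
from collections import defaultdict

coded = [
    ("to-cut", "correct"), ("to-cut", "incorrect"), ("to-cut", "correct"),
    ("to-stir", "incorrect"), ("to-stir", "incorrect"), ("to-stir", "correct"),
]

# Group Boolean correctness flags by gesture.
by_gesture = defaultdict(list)
for gesture, code in coded:
    by_gesture[gesture].append(code == "correct")

# Per-gesture nameability = proportion of correct namings;
# the overall index is the mean across gestures.
per_gesture = {g: sum(v) / len(v) for g, v in by_gesture.items()}
overall = sum(per_gesture.values()) / len(per_gesture)
print(f"overall nameability index: {overall:.0%}")  # 50% for this toy data
```

An index near 49% is consistent with the quoted claim: the gestures are recognizable well above floor but remain ambiguous without accompanying speech.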