2016
DOI: 10.3758/s13414-016-1109-4
High visual resolution matters in audiovisual speech perception, but only for some

Abstract: The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from…

Cited by 17 publications (12 citation statements) | References 63 publications
“…When monologues were presented in high levels of background noise including music and multilingual talkers, participants looked at the eyes approximately half of the time (Vatikiotis-Bateson, Eigsti, Yano & Munhall, 1998). It could be argued that this is due to the nature and length of the stimuli (45 s), as participants may be looking for social/emotional cues whilst listening to the narrative (Alsius, Wayne, Paré & Munhall, 2016). Another study found that participants focused more on the nose and mouth when sentences were presented in noise (multi-talker babble), again suggesting that the area directly surrounding the mouth is important (Buchan, Paré & Munhall, 2008).…”
Section: Introduction (mentioning)
Confidence: 99%
“…As noted by a reviewer, this may have provided salient features, which may have influenced perception. For example, previous studies have shown that viewers increase their gaze at the mouth region when images have increased resolution (e.g., Alsius et al., 2016); however, this strategy does not necessarily lead to improved performance. Although the participants in this study did not report that the dots inhibited or facilitated their decision, it cannot be precluded that their performance would have been more or less successful without the dots.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Visual information also could have been processed through peripheral channels, which are very important for audiovisual speech perception. For example, perceptual studies using spatially filtered audiovisual speech stimuli have shown that lower spatial frequencies are the main contributors to audiovisual speech perception (Munhall, Kroos, et al., 2004), although there are individual differences in sensitivity to high spatial frequency information in speech (Alsius et al., 2016). Furthermore, studies of audiovisual speech integration have shown that the McGurk effect (McGurk & MacDonald, 1976) persists even when the visual stimulus is up to 40° in the periphery (Paré et al., 2003).…”
Section: Discussion (mentioning)
Confidence: 99%