2009
DOI: 10.1371/journal.pone.0004638

Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space

Abstract: Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at th…

Cited by 165 publications (195 citation statements); references 86 publications (123 reference statements).
“…Adults with age-related deterioration in the auditory system will have familiarity with using other contextual cues (including lip-reading) to help disambiguate auditory speech information. Bayesian models of audiovisual speech integration suggest that the modality with less confusion should play a larger role for optimal integration [70,71], and behavioral studies confirm that information from the most reliable modality is given greater weight during perception of multisensory speech stimuli [72,73]. Supporting this view, the unisensory accuracy of both older groups was greater for identifying speech information in the visual modality than in the auditory modality.…”
Section: Discussion (mentioning)
confidence: 86%
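
To make the reliability-weighting claim in the excerpt above concrete, here is a minimal sketch of inverse-variance (maximum-likelihood) cue combination in Python. It illustrates the general principle only, not the cited authors' model; the function name and the numerical values are assumptions chosen for the example.

# Minimal sketch of reliability-weighted (inverse-variance) cue combination.
# Illustrative toy only, not the model used in the cited papers; all names
# and numbers are assumptions chosen for this example.

def combine_estimates(s_a, var_a, s_v, var_v):
    """Combine an auditory estimate s_a and a visual estimate s_v,
    weighting each by its reliability (1 / variance)."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    s_hat = w_a * s_a + w_v * s_v
    var_hat = 1.0 / (1.0 / var_a + 1.0 / var_v)  # combined variance is smaller than either input
    return s_hat, var_hat

# Noisy audition (large variance) vs. clear lip-reading (small variance):
# the visual estimate dominates, mirroring "information from the most
# reliable modality is given greater weight".
print(combine_estimates(s_a=2.0, var_a=4.0, s_v=0.5, var_v=1.0))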
“…Although online effects of vision on audition are well known (Ma, Zhou, Ross, Foxe, & Parra, 2009;McGurk & MacDonald, 1976), less is known about the degree to which auditory information affects online visual processing and visual attention. There is now accumulating evidence that sounds affect visual perception (Sekuler, Sekuler, & Lau, 1997;Shams, Kamitani, & Shimojo, 2002), with modulations of early visual cortex by sounds detected in as little as 35-65 msec (Shams, Iwaki, Chawla, & Bhattacharya, 2005).…”
Section: Discussion (mentioning)
confidence: 99%
“…Interestingly, this profile was observed in 49% of the tested neurons. In the case of psychophysics, this principle is often applied to the observation of larger benefits of multisensory stimulation when the unisensory stimuli are themselves near threshold and/or noisy (Ma, Zhou, Ross, Foxe, & Parra, 2009;Bolognini, Leo, Passamonti, Stein, & Làdavas, 2007;Rach & Diederich, 2006;Diederich & Colonius, 2004;Grant & Seitz, 2000;Sumby & Pollack, 1954). This principle can also be applied when considering the perceptual benefits of multisensory interactions in the case of sensory deficits (Rouger et al, 2007;Laurienti et al, 2006;Hairston, Laurienti, Mishra, Burdette, & Wallace, 2003).…”
Section: Discussion (mentioning)
confidence: 99%
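
The inverse-effectiveness pattern described in this excerpt is often summarized with a multisensory enhancement index: the percentage gain of the combined response over the best unisensory response. Below is a brief sketch under that assumption, with made-up numbers rather than data from any cited study.

# Multisensory enhancement index: percentage gain of the combined
# (audiovisual) response over the best unisensory response.
# Numbers are invented for illustration only.

def enhancement_index(multisensory, unisensory_a, unisensory_v):
    best_unisensory = max(unisensory_a, unisensory_v)
    return 100.0 * (multisensory - best_unisensory) / best_unisensory

# Near-threshold unisensory responses -> large proportional gain ...
print(enhancement_index(multisensory=0.30, unisensory_a=0.10, unisensory_v=0.12))  # 150%
# ... strong unisensory responses -> small proportional gain (inverse effectiveness).
print(enhancement_index(multisensory=0.95, unisensory_a=0.85, unisensory_v=0.80))  # ~12%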