2012
DOI: 10.1016/j.neuropsychologia.2012.01.010

Speech comprehension aided by multiple modalities: Behavioural and neural interactions

Abstract: Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources (e.g. voice, face, gesture, linguistic context) to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further…

Cited by 78 publications (82 citation statements)
References 114 publications
“…However, it may also be that posterior STS is active in determining relative weighting or attention directed at complementary modalities. Consistent with this view, activity in posterior STS also appears to distinguish between variations in visual clarity to a greater degree when speech is less intelligible (McGettigan et al., 2012)…”
Section: Neural Mechanisms Supporting Audiovisual Speech Processing
Citation type: mentioning
confidence: 74%
“…Although a whole tradition of behavioral studies has laid the groundwork for understanding cognitive processes in adverse listening conditions (Mattys et al., 2009; Davis & Johnsrude, 2003; Pichora-Fuller, 2003; Stickney & Assmann, 2001; Kalikow et al., 1977; Miller et al., 1951), only a few neuroimaging studies (e.g., McGettigan et al., 2012; Davis, Ford, Kherif, & Johnsrude, 2011; Obleser & Kotz, 2010; Obleser, Wise, Alex Dresner, & Scott, 2007) and EEG studies (e.g., Boulenger, Hoen, Jacquier, & Meunier, 2011; Obleser & Kotz, 2011; Romei, Wambacq, Besing, Koehnke, & Jerger, 2011; Aydelott, Dick, & Mills, 2006; Connolly, Phillips, Stewart, & Brake, 1992) have taken on the issue of semantic or expectancy benefits in degraded speech…”
Section: Semantic Benefits in Adverse Listening
Citation type: mentioning
confidence: 99%
“…For example, when the auditory and visual information do not match, auditory stimuli can be misperceived, as in the well-documented McGurk effect (McGurk & MacDonald, 1976): an auditory signal /ba/ presented simultaneously with a visual /ga/ often results in an illusory percept /da/. Furthermore, a large body of empirical work has shown that viewing a speaker's mouth movements provides additional information that can improve auditory speech perception, particularly when the auditory signal is masked by background sounds (e.g., Bishop & Miller, 2009; McGettigan et al., 2012; Ross, Saint-Amour, Leavitt, Javitt, & Foxe, 2007; Sánchez-García, Alsius, Enns, & Soto-Faraco, 2011; Summerfield, MacLeod, McGrath, & Brooke, 1989)…”
Citation type: mentioning
confidence: 99%