2011
DOI: 10.1037/a0021683
Does audiovisual speech offer a fountain of youth for old ears? An event-related brain potential study of age differences in audiovisual speech perception.

Abstract: The current study addressed the question of whether audiovisual (AV) speech can improve speech perception in older and younger adults in a noisy environment. Event-related potentials (ERPs) were recorded to investigate age-related differences in the processes underlying AV speech perception. Participants performed an object categorization task in three conditions: auditory-only (A), visual-only (V), and AV speech. Both age groups revealed an equivalent behavioral AV speech benefit over unisensory trials. ERP…

Cited by 73 publications (130 citation statements: 16 supporting, 108 mentioning, 6 contrasting)
References 49 publications
“…That is, the evidence of AV interaction (i.e., the difference between AV and A+V waveform amplitudes) was evident during the timing of the N1 component for younger adults but, for older adults, it was clearly notable even earlier, at the timing of the preceding P1 component (see Footnote 4). This finding is consistent with our previous work (Winneke & Phillips, 2011).…”
Section: Electrophysiological Results (supporting)
confidence: 95%
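The additive-model comparison these excerpts rely on (testing whether the AV response deviates from the sum of the unisensory A and V responses) is straightforward to express in code. Below is a minimal sketch, not the authors' pipeline: the array shapes, sampling rate, and P1/N1 window bounds are illustrative assumptions.

```python
import numpy as np

# Condition-averaged ERPs as (channels x samples) arrays; random placeholders
# stand in for real data (64 channels, 0.5 s epoch at 512 Hz -- all assumed).
fs = 512
t = np.arange(int(0.5 * fs)) / fs
rng = np.random.default_rng(0)
erp_av = rng.standard_normal((64, t.size))  # audiovisual condition
erp_a = rng.standard_normal((64, t.size))   # auditory-only condition
erp_v = rng.standard_normal((64, t.size))   # visual-only condition

# Additive model: a reliable difference between AV and A+V is taken as
# evidence of multisensory interaction.
interaction = erp_av - (erp_a + erp_v)

def mean_amplitude(wave, t, lo, hi):
    """Mean amplitude per channel within a latency window (in seconds)."""
    mask = (t >= lo) & (t < hi)
    return wave[:, mask].mean(axis=1)

# Typical P1 (~50-100 ms) and N1 (~100-150 ms) windows; the cited papers'
# exact windows may differ.
p1_effect = mean_amplitude(interaction, t, 0.050, 0.100)
n1_effect = mean_amplitude(interaction, t, 0.100, 0.150)
print(p1_effect.shape, n1_effect.shape)  # one interaction value per channel
```

An earlier multisensory interaction in older adults, as reported above, would then appear as a reliable nonzero P1-window effect in that group but not in younger adults.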
“…82, η²p = .15. For younger adults, there was no difference between the modality conditions, whereas for older adults, the P1 was smaller in AV compared with the A-only and the A+V conditions. Thus, older adults showed an earlier multisensory interaction than younger adults, replicating our previous findings (Winneke & Phillips, 2011). … ms after presentation of the stimuli (Friedman, Kazmerski, & Fabiani, 1997; Watter et al., 2001), and therefore the values from the Pz electrodes were chosen for the analysis. P3 amplitude.…”
Section: Electrophysiological Results (mentioning)
confidence: 63%
“…Such stimuli produce sharper sound onsets than our critical syllables because ours were taken from natural 3-syllable utterances with second-syllable stress, with initial consonants belonging to different consonant classes (i.e., the voiced dental fricative /d/ [ð], the velar approximant /g/ [ɣ̞], the voiceless velar fricative /j/ [x], and the nasal /n/ [n]). Of note, however, is that others have observed N1/P2 peaks that are comparable in size (i.e., a ~4 μV peak-to-peak amplitude; Ganesh, Berthommier, Vilain, Sato, & Schwartz, 2014), or even smaller than what we observed here (Winneke & Phillips, 2011). Moreover, even when N1/P2 amplitudes are of the usual size, lip-read information does not always suppress the N1 (Baart et al., 2014) and/or the P2 (Alsius et al., 2014).…”
Section: Experiment 2: Discussion (contrasting)
confidence: 64%
“…As demonstrated by McGurk and MacDonald (1976), lip-read context can change perceived sound identity, and when it does, it triggers an auditory MMN response when the illusory AV stimulus is embedded in a string of congruent AV stimuli (e.g., Colin, Radeau, Soquet, & Deltenre, 2004; Colin et al., 2002; Saint-Amour, De Sanctis, Molholm, Ritter, & Foxe, 2007). When sound onset is sudden and does not follow repeated presentations of standard sounds, it triggers an N1/P2 complex (a negative peak at 100 ms followed by a positive peak at ~200 ms), and it is well documented that the amplitude and latency of both peaks are modulated by lip-read speech (e.g., Alsius, Möttönen, Sams, Soto-Faraco, & Tiippana, 2014; Baart, Stekelenburg, & Vroomen, 2014; Besle, Fort, Delpuech, & Giard, 2004; Frtusova, Winneke, & Phillips, 2013; Klucharev, Möttönen, & Sams, 2003; Stekelenburg, Maes, van Gool, Sitskoorn, & Vroomen, 2013; van Wassenhove, Grant, & Poeppel, 2005; Winneke & Phillips, 2011). Thus, studies measuring both the MMN and the N1/P2 peaks indicate that lip-reading affects sound processing within 200 to 250 ms after sound onset.…”
Section: Introduction (mentioning)
confidence: 98%
“…Results showed that the gain, defined as the difference in speech recognition accuracy between the AV and auditory-only (A) conditions, was greatest at −12 dB SNR. Electrophysiological studies using electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) have investigated multisensory integration in single-level noise environments (Winneke and Phillips 2011; Callan et al. 2001, 2003; Bishop and Miller 2009). Less electrophysiological research, however, has been conducted on multisensory integration in different noise environments.…”
Section: Introduction (mentioning)
confidence: 98%
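The AV gain this excerpt refers to reduces to a simple difference score computed per noise level. A minimal sketch with invented accuracy values (the SNRs and numbers are placeholders, not data from the cited study):

```python
# AV gain = recognition accuracy in the AV condition minus accuracy in the
# auditory-only (A) condition, per signal-to-noise ratio (SNR, in dB).
# All values below are invented for illustration.
acc_av = {-18: 0.55, -12: 0.70, -6: 0.85, 0: 0.95}
acc_a  = {-18: 0.30, -12: 0.35, -6: 0.65, 0: 0.90}

gain = {snr: acc_av[snr] - acc_a[snr] for snr in acc_av}
best_snr = max(gain, key=gain.get)
print(gain)      # AV benefit at each SNR
print(best_snr)  # SNR where the benefit peaks (-12 dB in this toy example)
```

With these placeholder numbers the gain peaks at an intermediate SNR, mirroring the pattern the excerpt describes: visual speech helps most when the auditory signal is degraded but not yet unintelligible.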