2015
DOI: 10.1002/brb3.407
Spatio‐temporal distribution of brain activity associated with audio‐visually congruent and incongruent speech and the McGurk Effect

Abstract: Introduction: Spatio‐temporal distributions of cortical activity in response to audio‐visual presentations of meaningless vowel‐consonant‐vowels, and the effects of audio‐visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented…

Cited by 5 publications (7 citation statements)
References 150 publications (306 reference statements)
“…It may then be proposed that in the McGurk paradigm, both visual and auditory streams give rise to independent articulatory hypotheses, which must be reconciled to allow the extraction of a unified phonological representation during working memory encoding. This assertion is consistent with previous investigations of the McGurk paradigm, which have proposed an initial discrepancy detection stage followed by later resolution/integration processes [48–50]. In congruent (i.e., audiovisual match) trials, minimal discrepancy is anticipated between initial visual-based and auditory-based articulatory hypotheses, allowing for rapid phonological encoding in working memory.…”
Section: Introduction (supporting)
confidence: 93%
“…In the current study, ~15% of subjects (6/39) were classified as non-perceivers and excluded from further analysis because they did not perceive sensory fusion in at least 15% of McGurk trials [49]. The proportion of subjects not susceptible to the McGurk illusion is not universally reported in the literature, nor is there a standardized threshold subjects must meet to be considered a ‘perceiver’ or ‘non-perceiver.’ This consideration notwithstanding, the reported proportion of non-perceiving subjects ranges from 0–54% [48, 79], with an average of ~25% [54]. Among the subjects in the current study who reliably perceived the McGurk illusion, sensory fusion was reported in 73.5% of trials.…”
Section: Discussion (mentioning)
confidence: 99%
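The 15% fusion criterion quoted above amounts to a simple per-subject classification rule. A minimal Python sketch of that rule is given below, assuming each McGurk trial is coded True when a fused percept was reported; the function name, threshold constant, and example data are illustrative and not taken from the cited study.

    # Minimal sketch of a perceiver / non-perceiver split based on the 15%
    # fusion-rate criterion quoted above (an assumption about how responses
    # were coded; True marks a trial on which a fused percept was reported).
    from typing import Dict, List

    FUSION_THRESHOLD = 0.15  # minimum proportion of fused McGurk trials

    def classify_perceivers(responses: Dict[str, List[bool]]) -> Dict[str, str]:
        """Label each subject as 'perceiver' or 'non-perceiver'."""
        labels = {}
        for subject, trials in responses.items():
            fusion_rate = sum(trials) / len(trials) if trials else 0.0
            labels[subject] = ("perceiver" if fusion_rate >= FUSION_THRESHOLD
                               else "non-perceiver")
        return labels

    # Example: a subject reporting fusion on 2 of 20 trials (10%) is excluded.
    print(classify_perceivers({"s01": [True] * 2 + [False] * 18}))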
“…These included channel groups corresponding to established sites of auditory, audiovisual, and visual speech processing, including frontal, left temporal, and central parietal channels [23, 24, 31, 46, 47, 48, 49, 50, 51]. Consistent with previous audiovisual oddball studies, adaptive means and peak latencies were calculated in 4 time windows of interest corresponding to the N1 (50–100 ms), P2 (100–150 ms), early MMN (150–200 ms), and late MMN (300–400 ms) [2, 24, 25, 52]. Peak latencies were defined as the time of maximum voltage in time windows of interest, and adaptive means were calculated for each participant by averaging 10 samples (20 ms) on either side of the peak.…”
Section: Methods (mentioning)
confidence: 99%
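The peak-latency and adaptive-mean measures quoted above are straightforward to reproduce on an averaged waveform. The Python sketch below assumes a 500 Hz sampling rate, so that 10 samples on either side of the peak correspond to the stated 20 ms; the function name and the synthetic waveform are illustrative, not the cited study's data or code.

    # Illustrative sketch of the windowed peak-latency and adaptive-mean
    # measures described above (assumed 500 Hz sampling, i.e. 2 ms per sample).
    import numpy as np

    def peak_latency_and_adaptive_mean(erp, times, window, half_width=10):
        """Return (peak latency in ms, adaptive mean amplitude) for one window.

        erp        : 1-D array of voltages from an averaged channel group
        times      : matching array of sample times in ms
        window     : (start_ms, end_ms), e.g. (100, 150) for the P2
        half_width : samples averaged on either side of the peak
                     (10 samples = 20 ms at the assumed sampling rate)
        """
        in_window = np.flatnonzero((times >= window[0]) & (times <= window[1]))
        # Peak latency: time of maximum voltage within the window of interest.
        peak_idx = in_window[np.argmax(erp[in_window])]
        lo = max(peak_idx - half_width, 0)
        hi = min(peak_idx + half_width + 1, erp.size)
        return times[peak_idx], erp[lo:hi].mean()

    # Toy example: a positive P2-like deflection peaking near 120 ms.
    times = np.arange(-100, 500, 2.0)                         # 2 ms per sample
    erp = 4.0 * np.exp(-((times - 120) ** 2) / (2 * 15.0 ** 2))
    print(peak_latency_and_adaptive_mean(erp, times, (100, 150)))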
“…Among the present findings, the abnormalities of ALFF, ReHo, and DC in the angular gyrus were especially noticeable. Recent research has suggested that the angular gyrus is responsible for complex mental phenomena and processes, such as understanding visual and audio inputs [81], interpreting language [82], retrieving memories [83], and maintaining consciousness [84]. Moreover, the angular gyrus has been demonstrated to be one of the overlapping regions between the default mode network and social brain networks [85].…”
Section: ReHo Alterations in Patients with SZ and OCD (mentioning)
confidence: 99%