2022
DOI: 10.1523/eneuro.0209-22.2022

MEG Activity in Visual and Auditory Cortices Represents Acoustic Speech-Related Information during Silent Lip Reading

Abstract: Speech is an intrinsically multisensory signal, and seeing the speaker’s lips forms a cornerstone of communication in acoustically impoverished environments. Still, it remains unclear how the brain exploits visual speech for comprehension. Previous work debated whether lip signals are mainly processed along the auditory pathways or whether the visual system directly implements speech-related processes. To probe this, we systematically characterized dynamic representations of multiple acoustic and visual speech…

Cited by 13 publications (8 citation statements). References 79 publications.
“…Our current study demonstrates that visual speech information can be decoded in the auditory region, aligning with a growing body of research, including the findings of Bröhl et al. (54), which underscores the multisensory nature of speech processing. While both studies underscore the auditory cortex’s role in integrating visual speech, our approach, utilizing a combination of fMRI and intracranial EEG, provides unique spatial precision.…”
Section: Discussion (supporting)
confidence: 90%
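
The decoding result described in this statement can be illustrated with a minimal, hypothetical sketch: a cross-validated classifier distinguishing two visual-speech conditions from auditory-region activity patterns. The data shapes, the synthetic data, and the classifier choice are assumptions for illustration only; this does not reproduce the cited study’s fMRI/iEEG pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical data: trials x auditory-ROI features (voxels or electrodes).
    # A real analysis would use measured responses; random data yields ~chance accuracy.
    rng = np.random.default_rng(0)
    n_trials, n_features = 120, 200
    X = rng.standard_normal((n_trials, n_features))
    y = rng.integers(0, 2, n_trials)  # two visual-speech conditions

    # 5-fold cross-validated decoding accuracy in auditory-region patterns.
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, y, cv=5)
    print(f"Decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")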
“…Neural speech tracking is widely used to study the neural processing of continuous speech, though primarily with audio-only stimuli (Brodbeck, Hong, et al., 2018; Chalas et al., 2022; Di Liberto et al., 2015; Keitel et al., 2018). Recent studies have used audiovisual speech settings, but without directly modeling the visual speech features (Golumbic et al., 2013) or not incorporating their temporal dynamics due to the use of frequency-based methods (Aller et al., 2022; Bröhl et al., 2022; Park et al., 2016). Here, we show, for the first time, the temporal dynamics and cortical origins of TRFs obtained from lip movements in an audiovisual setting with one or two speakers.…”
Section: Discussion (mentioning)
confidence: 99%
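
The TRF approach this statement refers to can be sketched as ridge regression on a time-lagged copy of the stimulus. This is a minimal illustration under assumed parameters (lag window, regularisation strength), not the dedicated estimators (e.g., mTRF-style toolboxes) the cited studies used.

    import numpy as np

    def estimate_trf(stimulus, response, fs, tmin=-0.1, tmax=0.4, alpha=1.0):
        """Estimate a temporal response function by ridge regression.

        stimulus : (n_samples,) speech or lip feature (e.g., lip aperture)
        response : (n_samples,) one MEG/EEG channel or source time course
        fs       : sampling rate in Hz
        Returns lags in seconds and one TRF weight per lag.
        """
        lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
        # Lagged design matrix: each column is the stimulus shifted by one lag.
        X = np.zeros((stimulus.size, lags.size))
        for j, lag in enumerate(lags):
            if lag >= 0:
                X[lag:, j] = stimulus[:stimulus.size - lag]
            else:
                X[:lag, j] = stimulus[-lag:]
        # Ridge solution: w = (X'X + alpha*I)^-1 X'y
        w = np.linalg.solve(X.T @ X + alpha * np.eye(lags.size), X.T @ response)
        return lags / fs, w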
“…MI was then computed between the brain signal and the 3000 shuffled envelope/surprise signals. The group-level mean was then tested against the 95th percentile of the random group-mean distribution, essentially implementing a one-sided randomisation test at p < .05 (Bröhl et al., 2022).…”
Section: Methods (mentioning)
confidence: 99%
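
The procedure in this statement can be sketched as follows, assuming Gaussian-copula mutual information (in the spirit of Ince et al.’s GCMI framework, which this literature commonly uses) and circular shifts as the shuffling scheme; both choices are assumptions, and the original implementation may differ.

    import numpy as np
    from scipy.stats import norm

    def copnorm(x):
        # Copula normalisation: map each value's rank to a standard-normal quantile.
        ranks = np.argsort(np.argsort(x))
        return norm.ppf((ranks + 1) / (len(x) + 1))

    def gcmi(x, y):
        # Gaussian-copula mutual information (bits) between two 1-D signals.
        rho = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
        return -0.5 * np.log2(1.0 - rho ** 2)

    def randomisation_test(brain, stimulus, n_perm=3000, seed=0):
        """One-sided group-level randomisation test at p < .05.

        brain    : (n_subjects, n_samples) neural signals
        stimulus : (n_samples,) envelope or surprise signal
        Returns the observed group-mean MI and the permutation threshold.
        """
        rng = np.random.default_rng(seed)
        observed = np.mean([gcmi(b, stimulus) for b in brain])
        null_means = np.empty(n_perm)
        for i in range(n_perm):
            # Circular shift destroys stimulus-brain alignment but preserves
            # the stimulus autocorrelation.
            shuffled = np.roll(stimulus, rng.integers(1, stimulus.size))
            null_means[i] = np.mean([gcmi(b, shuffled) for b in brain])
        threshold = np.percentile(null_means, 95)  # 95th percentile of null means
        return observed, threshold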