2022
DOI: 10.1101/2022.02.21.481292
Preprint

Visual and auditory cortices represent acoustic speech-related information during silent lip reading

Abstract: Speech is an intrinsically multisensory signal and seeing the speaker's lips forms a cornerstone of communication in acoustically impoverished environments. Still, it remains unclear how the brain exploits visual speech for comprehension and previous work debated whether lip signals are mainly processed along the auditory pathways or whether the visual system directly implements speech-related processes. To probe this question, we systematically characterized dynamic representations of multiple acoustic and vi…

Cited by 2 publications (2 citation statements)
References 87 publications
“…Researchers can apply it to unrelated sets of variables as well, where one assesses the effect of one variable while controlling for a third. Conditional MI was recently applied in the field of envelope tracking (Bröhl et al., 2022). Note, however, that the Gaussian copula approach has an upper limit on the number of dimensions one can use for accurate estimation of the covariance matrices.…”
Section: Discussion (mentioning)
confidence: 99%
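The conditional MI estimator mentioned in this statement can be sketched in a few lines. The idea of the Gaussian copula approach is to rank-transform each variable to a uniform distribution, map it through the standard normal inverse CDF, and then compute (conditional) MI analytically from Gaussian covariance matrices. The sketch below is illustrative only; the function names and the choice of a stdlib-only implementation are ours, not from the cited work, and a production analysis would use a dedicated GCMI toolbox with bias correction.

```python
import numpy as np
from statistics import NormalDist


def copula_normalize(x):
    """Rank-transform each column to uniform, then map through the
    standard normal inverse CDF (the Gaussian copula step)."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    out = np.empty_like(x)
    nd = NormalDist()
    n = x.shape[0]
    for j in range(x.shape[1]):
        ranks = np.argsort(np.argsort(x[:, j]))   # 0 .. n-1
        u = (ranks + 1) / (n + 1)                 # uniform in (0, 1)
        out[:, j] = [nd.inv_cdf(v) for v in u]
    return out


def gaussian_cmi(x, y, z):
    """Conditional MI I(X;Y|Z) in bits under a Gaussian-copula model."""
    gx, gy, gz = map(copula_normalize, (x, y, z))

    def h(*cols):
        # Gaussian differential entropy up to additive constants;
        # the constants cancel in the CMI combination below.
        c = np.atleast_2d(np.cov(np.hstack(cols), rowvar=False))
        return 0.5 * np.log2(np.linalg.det(c))

    # I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)
    return h(gx, gz) + h(gy, gz) - h(gz) - h(gx, gy, gz)
```

This makes the dimensionality caveat in the statement concrete: every extra dimension enlarges the covariance matrices whose determinants are estimated from a finite number of samples, so estimates degrade as dimensionality grows relative to sample size.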
“…This level of detail complements the temporal resolution offered by MEG, as used by Bröhl et al., enabling a more comprehensive understanding of how visual and auditory speech information is integrated at different neural levels. Our controlled study design, focusing on specific parameters such as stimulus length and voice onset time (VOT), offers distinct advantages over naturalistic designs such as those used by Bröhl et al. (54). By minimizing confounds due to natural correlations between auditory and visual speech elements, we can more accurately isolate and test the encoding of visual speech information in the auditory cortex.…”
(mentioning)
confidence: 99%