2018
DOI: 10.1101/405597
Preprint

Phase resetting in human auditory cortex to visual speech

Abstract: Natural conversation is multisensory: when we can see the speaker's face, visual speech cues influence our perception of what is being said. The neuronal basis of this phenomenon remains unclear, though there is indication that neuronal oscillations-ongoing excitability fluctuations of neuronal populations in the brain-represent a potential mechanism. Investigating this question with intracranial recordings in humans, we show that some sites in auditory cortex track the temporal dynamics of unisensory visual s…


Cited by 9 publications (8 citation statements)
References 80 publications
“…Our findings contribute to the growing literature of studies showing how visual input can influence the auditory cortex, especially pSTG (Besle et al., 2008; Ferraro et al., 2019; Kayser et al., 2008; Megevand et al., 2019; Zion Golumbic et al., 2013). Together with previous work showing that visual cortex is modulated by the presence or absence of auditory speech (Schepers et al., 2015), audiovisual speech is a prime example of how cross-modal interactions are harnessed by all levels of the cortical processing hierarchy in the service of perception and cognition (Ghazanfar and Schroeder, 2006).…”
Section: Model Predictions and Summary (supporting)
confidence: 78%
“…Building on this suggestion, in a recent paper we suggested that this visual preprocessing could selectively inhibit populations of neurons responsive to auditory phonemes incompatible with the observed visual mouth shape (Karas et al., 2019). The model incorporates evidence that the pSTG contains neural populations that represent specific phonemes (Formisano et al., 2008; Mesgarani et al., 2014; Hamilton et al., 2018) and that visual information influences processing in auditory cortex (Calvert et al., 1997; Pekkola et al., 2005; Besle et al., 2008; Kayser et al., 2008; Zion Golumbic et al., 2013a, b; Rhone et al., 2016; Megevand et al., 2018; Ferraro et al., 2020). Reduced responses in pSTG to audiovisual speech may reflect more efficient processing, with fewer neural resources required to decode the speech.…”
Section: Discussion (mentioning)
confidence: 99%
“…In contrast, identification of AV speech engages an extensive network of hierarchically-organized brain areas (Hickok and Poeppel, 2007; Peelle, 2019), mapping spectrotemporal representations to phonetic representations, and from there to lexical-semantic representations. Moreover, integration of auditory and visual speech cues may act through multiple integrative mechanisms, including early visual activation of auditory cortex, increasing perceptual sensitivity (Mégevand et al., 2018), and later integration of visual speech content (i.e., place and/or manner of articulation), reducing the density of phonemic and lexical neighborhoods (Tye-Murray et al., 2007; Peelle and Sommers, 2015). Clearly, task demands and stimuli play a major role in the patterns of multisensory deficits and recovery functions that are observed for any given experimental paradigm.…”
Section: Discussion (mentioning)
confidence: 99%
“…The task employed in the current study required the speeded detection of simple AV stimuli, without discrimination or identification. In contrast, identification of AV speech engages an extensive network of hierarchically-organized brain areas (Hickok and Poeppel, 2007; Peelle, 2019), projecting the spectrotemporal dynamics to a phonetic representation and from there to a lexical-semantic one. Moreover, integration of auditory and visual speech cues may act through multiple integrative mechanisms (see Peelle and Sommers, 2015): 1) an early mechanism that provides information about the timing of the incoming acoustic input, activating auditory cortex and increasing perceptual sensitivity (Megevand et al., 2018), and 2) a later mechanism that provides information about the content of a vocal utterance (i.e., place and/or manner of articulation), reducing the density of phonemic and lexical neighborhoods (Tye-Murray et al., 2007). Clearly, task demands and stimuli play a major role in the patterns of multisensory deficits and recovery functions that are observed for any given experimental paradigm.…”
mentioning
confidence: 99%