2017
DOI: 10.1101/124750
Preprint

Cortical Representations of Speech in a Multi-talker Auditory Scene

Abstract: The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple sp…


Cited by 9 publications (12 citation statements)
References 54 publications
“…The results from Experiment III are consistent with the neural representation of a sound's temporal regularity in auditory cortex being unaffected by attention, whereas an attentionally demanding visual task suppresses the neural representation of the regularity in higher-level brain regions. This is also in line with the well-accepted view of auditory processing, which assumes that neural activity in hierarchically lower regions of the auditory system is mostly sensitive to the acoustic properties of sounds and less receptive to a listener's attentional state, whereas neural activity in hierarchically higher regions is more sensitive to the attentional state of the listener, that is, whether the listener attends to auditory stimuli or ignores them (Davis et al., 2007; Wild et al., 2012; Puvvada and Simon, 2017; Holmes et al., in press). We speculate that a listener's attentional state affects the progression of the neural representation of a sound's temporal regularity from auditory cortex to higher-level brain regions, and that this progression is suppressed in situations with distracting visual stimulation.…”
Section: Neural Synchronization and Sustained Activity May Reflect DI… (supporting)
confidence: 86%
“…Critical insights into the neural underpinnings of selective attention to speech have been provided by imaging techniques such as magnetoencephalography (MEG) and electrocorticography (ECoG). In particular, studies employing these techniques have characterized the internal representations of attended and unattended speech streams at multiple levels along the auditory cortical hierarchy (Ding & Simon; Golumbic et al.; Mesgarani & Chang; Puvvada & Simon).…”
Section: Introduction (mentioning)
confidence: 99%
“…The mechanisms involved in this ability are not well understood, but previous research suggests at least two separable cortical processing stages. In magnetoencephalographic (MEG) recordings of subjects listening to multiple talkers (Puvvada and Simon, 2017), the early (~50 ms) cortical response component is better…”
Section: Introduction (mentioning)
confidence: 99%
“…Interference between individual speech sources, for a small number of talkers, is relatively hard to discern in acoustic envelope features (e.g., the speech spectrogram); quantitatively, the envelope of the acoustic mixture is strongly correlated with the sum of the envelopes of the individual speech sources (Puvvada and Simon, 2017). … separate onsets lead to perceptual segregation (Bregman et al., 1994a, 1994b). For example, the onset of a vowel is characterized by a shared onset at the fundamental frequency of the voice and its harmonics.…”
Section: Introduction (mentioning)
confidence: 99%
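The claim above — that the envelope of an acoustic mixture is strongly correlated with the sum of the individual source envelopes — can be illustrated with a minimal numerical sketch. This is not code from the cited paper: the "sources" here are synthetic amplitude-modulated noise standing in for speech, and the envelope is extracted by crude rectify-and-smooth rather than any method the authors used.

```python
# Sketch (not from Puvvada & Simon, 2017): two synthetic "speech-like"
# sources, each a noise carrier under a slow amplitude modulation.
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                       # sample rate in Hz (assumed)
t = np.arange(2 * fs) / fs       # 2 seconds of signal

def make_source(mod_freq_hz):
    carrier = rng.standard_normal(t.size)                 # broadband noise
    modulation = 1.0 + np.sin(2 * np.pi * mod_freq_hz * t)  # slow envelope
    return modulation * carrier

def smooth_envelope(x, win=400):
    # crude envelope: rectify, then moving-average over ~25 ms at 16 kHz
    kernel = np.ones(win) / win
    return np.convolve(np.abs(x), kernel, mode="same")

s1, s2 = make_source(3.0), make_source(5.0)
mixture = s1 + s2

# Correlate the mixture's envelope with the sum of the source envelopes.
r = np.corrcoef(smooth_envelope(mixture),
                smooth_envelope(s1) + smooth_envelope(s2))[0, 1]
print(f"envelope correlation: {r:.3f}")  # expect a high positive value
```

The point of the sketch is the one the excerpt makes: because the envelopes add almost linearly, envelope-level features alone give little purchase for segregating a small number of talkers, which motivates looking at cues such as shared onsets instead.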