2009
DOI: 10.1523/jneurosci.3694-08.2009
Dynamic and Task-Dependent Encoding of Speech and Voice by Phase Reorganization of Cortical Oscillations

Abstract: Speech and vocal sounds are at the core of human communication. Cortical processing of these sounds critically depends on behavioral demands. However, the neurocomputational mechanisms enabling this adaptive processing remain elusive. Here we examine the task-dependent reorganization of electroencephalographic responses to natural speech sounds (vowels /a/, /i/, /u/) spoken by three speakers (two female, one male) while listeners perform a one-back task on either vowel or speaker identity. We show that dynamic …
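The "phase reorganization" in the title refers to stimulus- and task-locked realignment of the phase of ongoing cortical oscillations. As a rough illustration of how such realignment is commonly quantified, and not the paper's actual analysis pipeline, the sketch below computes inter-trial phase coherence (ITC) in the alpha band on synthetic single-channel epochs. The sampling rate, band limits, trial counts, and data are all assumptions.

```python
# Illustrative sketch: quantifying stimulus-locked phase reorganization as
# inter-trial phase coherence (ITC) in the alpha band. All parameters and
# data here are hypothetical, not taken from the cited study.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                        # sampling rate (Hz), assumed
n_trials, n_samples = 120, fs   # 120 one-second epochs, assumed
rng = np.random.default_rng(0)

# Fake single-channel EEG epochs (trials x samples): noise plus a weak
# stimulus-locked 10 Hz component to mimic partial phase alignment.
t = np.arange(n_samples) / fs
epochs = rng.standard_normal((n_trials, n_samples)) \
         + 0.5 * np.sin(2 * np.pi * 10 * t)

# Band-pass to the alpha range (8-12 Hz) and extract the analytic phase.
b, a = butter(4, [8, 12], btype="band", fs=fs)
alpha = filtfilt(b, a, epochs, axis=1)
phase = np.angle(hilbert(alpha, axis=1))

# ITC: length of the mean unit phase vector across trials, per time point.
# Values near 1 indicate trial-to-trial phase alignment (reorganization);
# values near 0 indicate random phases.
itc = np.abs(np.mean(np.exp(1j * phase), axis=0))
print(itc.max())
```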

Cited by 43 publications (40 citation statements). References 45 publications.
“…This previous work extends the findings on the functional contribution of alpha oscillations to inhibitory control (Klimesch, Sauseng, & Hanslmayr, 2007) and shows that the precise timing of alpha oscillations promotes sensory speech processing as well. On the basis of the work of Bonte et al. (2009), the increased connectivity in the alpha band we revealed in musicians is interpreted as reflecting a training-related tuning of bilateral auditory-related brain regions during speech processing.…”
Section: Group Differences In Interhemispheric Connectivity Between H…
mentioning (confidence: 77%)
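The quoted passage attributes the musicians' effect to interhemispheric coupling in the alpha band. As a purely illustrative sketch, with synthetic data, hypothetical "left" and "right" channels, and a generic phase-locking value (PLV) that is not necessarily the connectivity measure used in the cited study, alpha-band coupling between two channels can be estimated like this:

```python
# Hypothetical sketch: interhemispheric alpha-band coupling as a
# phase-locking value (PLV) between two synthetic channels.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, n_trials, n_samples = 250, 100, 250   # assumed sampling rate and sizes
rng = np.random.default_rng(1)
t = np.arange(n_samples) / fs

common = np.sin(2 * np.pi * 10 * t)        # shared 10 Hz alpha rhythm
left = 0.8 * common + rng.standard_normal((n_trials, n_samples))
right = 0.8 * common + rng.standard_normal((n_trials, n_samples))

# Alpha-band phase of each channel on every trial.
b, a = butter(4, [8, 12], btype="band", fs=fs)
phl = np.angle(hilbert(filtfilt(b, a, left, axis=1), axis=1))
phr = np.angle(hilbert(filtfilt(b, a, right, axis=1), axis=1))

# PLV: consistency of the left-right phase difference across trials,
# per time point; values near 1 indicate strong alpha-band coupling.
plv = np.abs(np.mean(np.exp(1j * (phl - phr)), axis=0))
print(plv.mean())
```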
“…Finally, previous work has elucidated that alpha oscillations have the faculty to temporally realign their phase while processing vowels in a task-dependent manner (Bonte, Valente, & Formisano, 2009). This previous work extends the findings on the functional contribution of alpha oscillations to inhibitory control (Klimesch, Sauseng, & Hanslmayr, 2007) and shows that the precise timing of alpha oscillations promotes sensory speech processing as well.…”
Section: Group Differences In Interhemispheric Connectivity Between H…
mentioning (confidence: 99%)
“…In the case of a phase-reset mechanism, changes in evoked responses might indicate adaptive fine-tuning of neuronal oscillations for current processing demands (Bonte et al. 2009; Hanslmayr et al. 2006). On the other hand, ER changes consistent with an added-energy mechanism would reflect either the number of neurons being recruited for the current processing demands or their mean activity (Jones et al. 2007).…”
Section: On the Importance Of Differentiation Between Added-energy An…
mentioning (confidence: 99%)
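The contrast drawn in this passage between phase-reset and added-energy accounts can be made concrete with a toy simulation. The sketch below uses entirely hypothetical parameters and is not a reproduction of the cited analyses: it generates trials under each account and shows that both produce a post-stimulus evoked response in the trial average, but only the added-energy case raises mean single-trial power, which is the diagnostic difference the passage describes.

```python
# Toy simulation contrasting a phase-reset account (ongoing oscillations are
# realigned, no power added) with an added-energy account (a stimulus-locked
# transient is injected on top of ongoing activity).
import numpy as np

fs, n_trials, n_samples = 250, 200, 250
rng = np.random.default_rng(2)
t = np.arange(n_samples) / fs
onset = n_samples // 2                      # "stimulus" at 0.5 s

def ongoing_alpha(phase0):
    """Ongoing 10 Hz oscillation with a given starting phase."""
    return np.sin(2 * np.pi * 10 * t + phase0)

# Phase reset: random phase before onset, common phase after onset.
reset = np.empty((n_trials, n_samples))
for i in range(n_trials):
    pre = ongoing_alpha(rng.uniform(0, 2 * np.pi))
    post = ongoing_alpha(0.0)
    reset[i] = np.concatenate([pre[:onset], post[onset:]])

# Added energy: random-phase oscillation plus a fixed evoked transient.
transient = 2.0 * np.exp(-((t - 0.55) ** 2) / 0.002)
added = np.stack([ongoing_alpha(rng.uniform(0, 2 * np.pi)) + transient
                  for _ in range(n_trials)])

for name, x in [("phase reset", reset), ("added energy", added)]:
    erp = x.mean(axis=0)                    # evoked response (trial average)
    power = (x ** 2).mean()                 # mean single-trial power
    print(f"{name}: post-onset ERP peak={np.abs(erp[onset:]).max():.2f}, "
          f"single-trial power={power:.2f}")
```

Under these assumptions, both conditions show a clear post-onset peak in the average, while single-trial power stays near its baseline only in the phase-reset condition.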
“…In an early time range (approximately 100 ms after sound onset), effects of the degree of speech degradation are expected on the N100 component, which has been shown to index early abstraction and percept-formation stages (Krumbholz et al., 2003; Näätänen, 2001; Obleser et al., 2006), especially in the context of an active comprehension task (for task effects on the N100/N100m see Bonte et al., 2009; Obleser et al., 2004a; Poeppel et al., 1996). Also based on a recent study finding enhanced N100 amplitudes in response to degraded sound (Miettinen et al., 2010), we expect the following: the more thorough the degradation of the signal, the more neural effort is likely to be allocated to encoding the acoustic signal and mapping it onto known phonological and lexical categories, leading to an enhanced N100 amplitude measured on the scalp.…”
Section: Introduction
mentioning (confidence: 99%)