2009
DOI: 10.1186/1471-2202-10-127

Electrophysiological evidence for an early processing of human voices

Abstract: Background: Previous electrophysiological studies have identified a "voice specific response" (VSR) peaking around 320 ms after stimulus onset, a latency markedly longer than the 70 ms needed to discriminate living from non-living sound sources and the 150 ms to 200 ms needed for the processing of voice paralinguistic qualities. In the present study, we investigated whether an early electrophysiological difference between voice and non-voice stimuli could be observed.

Cited by 113 publications (134 citation statements)
References 58 publications
“…3 Illustration of grand average "raw" waveforms for the SGV and NSV standard and deviant stimuli, in both the VCC and WCC (Miller, 2004; Belin et al., 2011; Belin et al., 2004; Kaganovich et al., 2006; Schweinberger et al., 2014). Importantly, the concurrent processing of vocal information was found to take place in early stages of information processing, that is, within the first 200 ms after voice stimulus onset (Beauchemin et al., 2006; Charest et al., 2009; Holeckova, Fischer, Giard, Delpuech, & Morlet, 2006; Kaganovich et al., 2006; Knösche, Lattner, Maess, Schauer, & Friederici, 2002; Titova & Näätänen, 2001). Besides the parallel processing of voice information, MMN studies demonstrated that concurrent linguistic processes (e.g., phonological, lexical, semantic, grammatical, and pragmatic) occur very early in the information-processing stream, within the MMN time window (Kujala et al., 2010; Kujala, Tervaniemi, & Schröger, 2007; Näätänen et al., 2007; Pakarinen et al., 2009; E.…”
Section: Discussion
confidence: 99%
“…To date, there has been no focused attempt to directly compare electrophysiological responses to each stimulus type in an a priori, controlled manner, although research has combined speech and non-speech vocal stimuli in the same experiment to differentiate these stimuli from other sound categories such as non-human sounds or music (Charest et al., 2009; Rigoulot, Pell & Armony, 2015).…”
Section: ERP Studies of Vocal Emotion Expressions
confidence: 99%
“…Effects recently reported at 164 ms appeared to be driven by the speech content of the stimuli [cf. Charest et al. (2009), their Fig. 4].…”
Section: Discussion
confidence: 99%
“…But this effect may instead reflect living versus man-made categorization (Murray et al., 2006) because voices were only contrasted with musical instruments. Charest et al. (2009) compared responses to human vocalizations (speech and nonspeech) with those to environmental sounds or bird songs. Voice-related AEP waveform modulations began 164 ms after stimulus onset, but additional analyses revealed their effect was mostly (if not wholly) driven by the speech content of the stimuli and/or acoustic differences.…”
Section: Introduction
confidence: 99%