2005
DOI: 10.1093/cercor/bhi040

Neural Substrates of Phonemic Perception

Abstract: The temporal lobe in the left hemisphere has long been implicated in the perception of speech sounds. Little is known, however, regarding the specific function of different temporal regions in the analysis of the speech signal. Here we show that an area extending along the left middle and anterior superior temporal sulcus (STS) is more responsive to familiar consonant-vowel syllables during an auditory discrimination task than to comparably complex auditory patterns that cannot be associated with learned phone…


Cited by 371 publications (312 citation statements). References 77 publications.
“…Thus, the above dissociations that we find in performance with the two stimulus types may be due not only to the fact that one is speech and the other not, but they may also be due to stimulus complexity. In support of the idea that different learning mechanisms exist for speech and non-speech sounds, there exists evidence from other studies for speech-specific neural mechanisms when stimulus complexity is controlled (Liebenthal, Binder, Spitzer, Possing, & Medler, 2005; Scott, Blank, Rosen, & Wise, 2000; Scott, Rosen, Lang, & Wise, 2006), or when the same stimuli are first perceived as nonspeech and then later as speech (Dehaene-Lambertz et al., 2005; Dufor, Serniclaes, Sprenger-Charolles, & Démonet, 2007). More generally, it is likely that the neural mechanisms underlying the processing of abstract, linguistically relevant properties versus the underlying acoustic characteristics of stimuli interact in a complex and non-exclusive manner, and that they depend on linguistic experience as well as on neural top-down processing mechanisms which interact with afferent pathways which carry stimulus information (Zatorre & Gandour, 2007).…”
Section: Discussion (mentioning)
confidence: 79%
“…Several recent studies comparing phonetic sounds to acoustically matched nonphonetic sounds (Dehaene-Lambertz et al., 2005; Liebenthal et al., 2005; Mottonen et al., 2006) or to noise (Binder et al., 2000; Rimol et al., 2005) have shown activation specifically in this brain region. Two factors might explain these discordant findings.…”
Section: Processing of Speech Compared to Unfamiliar Rotated Speech S… (mentioning)
confidence: 99%
“…3a), including auditory areas in the superior temporal cortex (Heschl's gyrus), multiple regions in the planum temporale (PT), superior temporal gyrus and sulcus (STG and STS, respectively), as well as middle temporal gyrus, insula cortex, inferior parietal cortex, inferior frontal cortex, and supramarginal gyrus (Davis and Johnsrude, 2003; Liebenthal et al., 2005; Hickok and Poeppel, 2007; Desai et al., 2008). A univariate comparison between trials perceptually classified as /aba/ and those classified as /ada/ did not yield significant differences in activation (q = 0.05, corrected for multiple comparisons with false discovery rate).…”
Section: Univariate Statistical Analysis (mentioning)
confidence: 99%
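The q = 0.05 threshold in the quote above refers to false discovery rate (FDR) control over the many voxel-wise tests in the contrast. As a point of reference only, here is a minimal Python sketch of the standard Benjamini-Hochberg step-up procedure; it is not code from the cited study, the helper name benjamini_hochberg is illustrative, and the example p-values are made up.

import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean mask marking which hypotheses are rejected
    while controlling the false discovery rate at level q.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                    # indices sorting p ascending
    ranked = p[order]
    # Compare each ranked p-value to its BH critical value (i/m) * q
    below = ranked <= (np.arange(1, m + 1) / m) * q
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])     # largest rank meeting the bound
        rejected[order[: k + 1]] = True      # reject all p-values up to rank k
    return rejected

# Made-up per-voxel p-values from a univariate contrast
p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(p_vals, q=0.05))    # [ True  True False ... False]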
“…According to popular models of auditory processing, representations become more abstract with hierarchical distance from the primary auditory cortex (A1) along two (what/where) pathways (Scott and Johnsrude, 2003; Liebenthal et al., 2005; Rauschecker and Scott, 2009). In humans, the regions adjacent to Heschl's gyrus, which we refer to as early auditory cortex, are supposedly restricted to the analysis of physical features and the acoustic structure of sounds.…”
Section: Introduction (mentioning)
confidence: 99%