Conduction aphasia is a language disorder characterized by frequent speech errors, impaired verbatim repetition, a deficit in phonological short-term memory, and naming difficulties in the presence of otherwise fluent and grammatical speech output. While traditional models of conduction aphasia have typically implicated white matter pathways, recent advances in lesion reconstruction methodology applied to groups of patients have implicated left temporoparietal zones. Parallel work using functional magnetic resonance imaging (fMRI) has pinpointed a region in the posterior-most portion of the left planum temporale, area Spt, which is critical for phonological working memory. Here we show that the region of maximal lesion overlap in a sample of 14 patients with conduction aphasia perfectly circumscribes area Spt, as defined in an aggregate fMRI analysis of 105 subjects performing a phonological working memory task. We provide a review of the evidence supporting the idea that Spt is an interface site for the integration of sensory and vocal tract-related motor representations of complex sound sequences, such as speech and music, and show how the symptoms of conduction aphasia can be explained by damage to this system.
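The core computation behind a group lesion-overlap map is voxelwise counting across patients' lesion masks. The sketch below is a minimal illustration of that idea, assuming one binary lesion mask per patient already normalized to a common template space; the file names and processing choices are hypothetical, not details taken from the study.

```python
# Minimal lesion-overlap sketch (assumptions: binary masks, common template space,
# hypothetical file names).
import numpy as np
import nibabel as nib

mask_paths = [f"patient_{i:02d}_lesion_mask.nii.gz" for i in range(1, 15)]

# Sum the binary masks voxelwise: each voxel's value is the number of patients
# whose lesion includes that voxel.
first = nib.load(mask_paths[0])
overlap = np.zeros(first.shape, dtype=int)
for path in mask_paths:
    overlap += (nib.load(path).get_fdata() > 0).astype(int)

# The region of maximal overlap is the set of voxels lesioned in the most patients.
max_n = overlap.max()
peak_voxels = np.argwhere(overlap == max_n)
print(f"Maximal overlap: {max_n} of {len(mask_paths)} patients across "
      f"{len(peak_voxels)} voxels")

# Save the overlap map so it can be compared with an fMRI-defined ROI (e.g., Spt).
nib.save(nib.Nifti1Image(overlap.astype(np.int16), first.affine),
         "lesion_overlap.nii.gz")
```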
Hierarchical organization of human auditory cortex has been inferred from functional imaging observations that core regions respond to simple stimuli (tones) whereas downstream regions are selectively responsive to more complex stimuli (band-pass noise, speech). It is assumed that core regions code low-level features, which are combined at higher levels in the auditory system to yield more abstract neural codes. However, this hypothesis has not been critically evaluated in the auditory domain. We assessed sensitivity to acoustic variation within intelligible versus unintelligible speech using functional magnetic resonance imaging and a multivariate pattern analysis. Core auditory regions on the dorsal plane of the superior temporal gyrus exhibited high levels of sensitivity to acoustic features, whereas downstream auditory regions in both the anterior superior temporal sulcus and the posterior superior temporal sulcus (pSTS) bilaterally showed greater sensitivity to whether speech was intelligible or not and less sensitivity to acoustic variation (acoustic invariance). Acoustic invariance was most pronounced in more posterior regions of the STS in both hemispheres, which we argue support phonological-level representations. This finding provides direct evidence for a hierarchical organization of human auditory cortex and clarifies the cortical pathways supporting the processing of intelligible speech.
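As a rough illustration of how multivariate pattern analysis can contrast acoustic sensitivity with acoustic invariance, the following sketch cross-validates a classifier on within-ROI voxel patterns twice: once for an acoustic distinction and once for the intelligible/unintelligible distinction. The data, labels, and use of scikit-learn are our assumptions, not the study's actual pipeline.

```python
# Minimal MVPA sketch (assumed setup, not the study's pipeline): compare how well
# ROI voxel patterns discriminate an acoustic distinction versus intelligibility.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

def decoding_accuracy(patterns, labels, n_splits=5):
    """patterns: (n_trials, n_voxels) ROI data; labels: condition per trial."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    return cross_val_score(LinearSVC(), patterns, labels, cv=cv).mean()

# Hypothetical ROI data and condition labels.
rng = np.random.default_rng(0)
roi_patterns = rng.standard_normal((80, 200))   # 80 trials x 200 voxels
acoustic_labels = np.repeat([0, 1], 40)         # e.g., two acoustic variants
intelligibility_labels = np.tile([0, 1], 40)    # intelligible vs. unintelligible

# A region with high acoustic sensitivity should decode the acoustic labels well;
# an acoustically invariant region should decode intelligibility but not acoustics.
print("acoustic decoding:", decoding_accuracy(roi_patterns, acoustic_labels))
print("intelligibility decoding:", decoding_accuracy(roi_patterns, intelligibility_labels))
```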
Processing incoming sensory information and transforming this input into appropriate motor responses is a critical and ongoing aspect of our moment-to-moment interaction with the environment. While the neural mechanisms in the posterior parietal cortex (PPC) that support the transformation of sensory inputs into simple eye or limb movements have received a great deal of empirical attention (in part because these processes are easy to study in nonhuman primates), little work has been done on sensory-motor transformations in the domain of speech. Here we used functional magnetic resonance imaging and multivariate analysis techniques to demonstrate that a region of the planum temporale (Spt) shows distinct spatial activation patterns during sensory and motor aspects of a speech task. This result suggests that, just as the PPC supports sensorimotor integration for eye and limb movements, area Spt forms part of a sensory-motor integration circuit for the vocal tract.
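One simple way to ask whether a region carries distinct sensory and motor codes is to compare within-phase and between-phase pattern similarity across split halves of the data. The sketch below illustrates that logic with simulated values; the split-half correlation approach and all variable names are our own illustration, not the analysis reported in the study.

```python
# Illustrative sketch (not the study's method): test whether a region's voxel
# pattern differs between the sensory (listening) and motor (covert rehearsal)
# phases by comparing within-phase to between-phase correlations across halves.
import numpy as np

def pattern_corr(a, b):
    # Pearson correlation between two voxel patterns.
    return np.corrcoef(a, b)[0, 1]

# Hypothetical Spt voxel patterns (300 voxels), split into two data halves.
rng = np.random.default_rng(1)
sensory_half1, sensory_half2 = rng.standard_normal((2, 300))
motor_half1, motor_half2 = rng.standard_normal((2, 300))

within = np.mean([pattern_corr(sensory_half1, sensory_half2),
                  pattern_corr(motor_half1, motor_half2)])
between = np.mean([pattern_corr(sensory_half1, motor_half2),
                   pattern_corr(motor_half1, sensory_half2)])

# Distinct sensory and motor codes predict within > between;
# a purely shared response predicts within roughly equal to between.
print(f"within-phase r = {within:.2f}, between-phase r = {between:.2f}")
```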
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemispheres. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension during acute left versus right hemisphere deactivation in Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. The task was performed under three conditions: at baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.
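The error analysis in such a picture-matching design reduces to a small contingency table of error type by injection side. The sketch below shows that tabulation with made-up counts (not the study's data) and a standard chi-square test as one plausible way to compare the distributions.

```python
# Illustrative sketch only (hypothetical counts, not the study's data): tabulate
# error types by injection side and test whether the distributions differ.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: left vs. right carotid injection; columns: semantic, phonemic, unrelated errors.
error_counts = np.array([
    [30, 7, 3],   # left injection (hypothetical counts)
    [ 6, 2, 1],   # right injection (hypothetical counts)
])

semantic_share = error_counts[0, 0] / error_counts[0].sum()
chi2, p, dof, expected = chi2_contingency(error_counts)
print(f"semantic share of left-injection errors: {semantic_share:.0%}")
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```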
General agreement exists that dorsal aspects of the temporal lobe support the perception of speech, but there is less agreement regarding the mapping between levels of speech processing and neural regions within the dorsal temporal lobe. The present experiment sought to identify temporal lobe regions that support one such level, namely lexical-phonological representation/processing. To do this, we manipulated phonological neighborhood density, a variable that affects processing within lexical-phonological networks. In a functional magnetic resonance imaging experiment, 10 participants listened to blocks of either high-density or low-density words. High-density words produced significantly more activation in the posterior half of the superior temporal sulcus bilaterally, suggesting that these regions are involved in lexical-phonological processing networks.
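At the group level, a block-design density manipulation like this can be summarized as a paired comparison of ROI responses across participants. The sketch below uses invented percent-signal-change values and a paired t-test purely to illustrate that comparison; it is not the paper's reported analysis.

```python
# Minimal sketch (assumed analysis with hypothetical values): compare ROI responses
# to high- versus low-density word blocks across participants with a paired t-test.
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical mean percent signal change in a posterior STS ROI,
# one value per participant (n = 10) and condition.
high_density = np.array([0.42, 0.35, 0.51, 0.29, 0.44, 0.38, 0.47, 0.33, 0.40, 0.36])
low_density  = np.array([0.31, 0.28, 0.44, 0.25, 0.37, 0.30, 0.39, 0.27, 0.34, 0.29])

t, p = ttest_rel(high_density, low_density)
print(f"high > low density: t(9) = {t:.2f}, p = {p:.4f}")
```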