Native listeners make use of higher-level, context-driven semantic and linguistic information during the perception of speech in noise. In a recent behavioral study using a new word-based paradigm that isolated the semantic level of speech, we showed that this native-language benefit is at least partly driven by semantic context (Golestani et al., 2009). Here, we used the same paradigm in a functional magnetic resonance imaging (fMRI) experiment to study the neural bases of speech intelligibility, as well as the neural bases of this semantic context effect in the native language. We employed a forced-choice recognition task on the first of two auditorily presented, semantically related or unrelated words, where the first, 'target' word was embedded in different levels of noise. Results showed that activation in components of the brain language network, including Broca's area and the left posterior superior temporal sulcus, as well as in brain regions known to be functionally related to attention and task difficulty, was modulated by stimulus intelligibility. In line with several previous studies examining the role of linguistic context in the intelligibility of degraded speech at the sentence level, we found that activation in the angular gyrus of the left inferior parietal cortex was modulated by the presence of semantic context, and further, that this modulation depended on the intelligibility of the speech stimuli. Our findings help to further elucidate the neural mechanisms underlying the interaction of context-driven and signal-driven factors during the perception of degraded speech, specifically at the semantic level.

© 2013 Elsevier Inc. All rights reserved.
Introduction

In studying the neural implementation of spoken language processing, it is important to consider the complexity of linguistic processes. For example, one can ask how higher-order, semantic processes and lower-order, perceptual processes interact during the processing of noisy speech, a phenomenon that is ubiquitous in our daily lives, and how the brain supports the interaction of these complementary cognitive and perceptual dimensions (Mattys et al., 2009). It is known that in one's native language, speech comprehension is often successful even when hearing noisy or degraded speech (Nabelek and Donahue, 1984; Takata and Nabelek, 1990; van Wijngaarden et al., 2002). Further, using the Speech Perception in Noise (SPIN) paradigm (Bilger et al., 1984; Kalikow et al., 1977), in which the predictability of the final word in sentences is manipulated, it has been shown that this native-language advantage can arise from the use of higher-level linguistic, contextual information (Florentine, 1985a; Mayo et al., 1997). These original studies further showed that lower signal-to-noise ratios (SNRs; i.e., higher noise levels) are associated with a greater context benefit (Mayo et al., 1997). We are thus capable of making use of linguistic context to compensate for poor signal quality. Linguistic context includes semantic and syntactic information, as well as pragmatic and proso...