Cortical processing of arithmetic and of language relies on both shared and task-specific neural mechanisms, which should also be dissociable from the particular sensory modality used to probe them. Here, spoken arithmetical and non-mathematical statements were employed to investigate the neural processing of arithmetic, compared with general language processing, in an attention-modulated cocktail party paradigm. Magnetoencephalography (MEG) data were recorded from 22 human subjects listening to audio mixtures of spoken sentences and arithmetic equations while selectively attending to one of the two speech streams. Short sentences and simple equations were presented diotically at fixed and distinct word/symbol and sentence/equation rates. Critically, this allowed neural responses to acoustics, words, and symbols to be dissociated from responses to sentences and equations. Indeed, simultaneous neural processing of the acoustics of words and symbols was observed in auditory cortex for both streams. Neural responses to sentences and equations, however, were predominantly to the attended stream, originating primarily from left temporal and parietal areas, respectively. Additionally, these neural responses were correlated with behavioral performance in a deviant detection task. Source-localized temporal response functions (TRFs) revealed distinct cortical dynamics of responses to sentences in left temporal areas and to equations in bilateral temporal, parietal, and motor areas. Finally, the target of attention could be decoded from MEG responses, especially in left superior parietal areas. In short, neural responses to arithmetic and language are especially well segregated during the cocktail party paradigm, and their correlation with behavior suggests that they may be linked to successful comprehension or calculation.

Significance Statement: Neural processing of arithmetic may rely on dedicated, modality-independent cortical networks that are distinct from those underlying language processing. Using a simultaneous cocktail party listening paradigm, we found that these separate networks segregate naturally when listeners selectively attend to one type of stimulus over the other. Time-locked activity in the left temporal lobe was observed in responses to both spoken sentences and equations, but the latter additionally showed bilateral parietal activity consistent with arithmetic processing. Critically, these responses were modulated by selective attention and correlated with task behavior, consistent with high-level processing underlying speech comprehension or correct calculation. The response dynamics show task-related differences that were used to reliably decode the attentional target (sentences or equations).
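The abstract above refers to source-localized temporal response functions (TRFs): linear kernels that map a stimulus feature onto the recorded neural response. As a rough illustration of the idea (not the study's actual pipeline), the sketch below estimates a TRF by ridge regression on a time-lagged design matrix; the variable names, sampling rate, lag window, and regularization strength are all assumptions chosen for illustration.

```python
# Minimal sketch of temporal response function (TRF) estimation via
# ridge regression. Illustrative only: the sampling rate, lag window,
# and regularization value are assumptions, not taken from the study.
import numpy as np

def estimate_trf(stimulus, response, fs, tmin=-0.1, tmax=0.5, lam=1e2):
    """Estimate a TRF mapping a 1-D stimulus feature to a 1-D neural
    response, both sampled at fs Hz."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = len(stimulus)
    # Lagged (Toeplitz-like) design matrix: one column per lag.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[: n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    # Ridge solution: w = (X'X + lam*I)^(-1) X'y
    XtX = X.T @ X + lam * np.eye(len(lags))
    w = np.linalg.solve(XtX, X.T @ response)
    return lags / fs, w  # lag times (s) and TRF weights

# Example with synthetic data: a noisy response to a known kernel.
rng = np.random.default_rng(0)
fs = 100
stim = rng.standard_normal(fs * 60)            # 60 s of a stimulus feature
true_kernel = np.exp(-np.arange(30) / 10)      # assumed ground-truth kernel
resp = np.convolve(stim, true_kernel, mode="full")[: len(stim)]
resp += 0.5 * rng.standard_normal(len(stim))
times, trf = estimate_trf(stim, resp, fs)      # trf approximates the kernel
```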
Numerous studies have suggested that the perception of a target sound stream (or source) can only be segregated from a complex acoustic background mixture if the acoustic features underlying its perceptual attributes (e.g., pitch, location, and timbre) induce temporally modulated responses that are mutually correlated (coherent), and that are uncorrelated (incoherent) with those of other sources in the mixture. This “temporal coherence” hypothesis asserts that attentive listening to one acoustic feature of a target enhances brain responses to that feature, but would also concomitantly (1) induce mutually excitatory influences with other coherently responding neurons, thus enhancing (or binding) them all as they respond to the attended source, and (2) build up suppressive interactions among neurons driven by temporally incoherent sound features, thus relatively reducing their activity. In this study, we report on EEG measurements in human subjects engaged in various sound segregation tasks that demonstrate rapid binding among the temporally coherent features of the attended source regardless of their identity (pure tone components, tone complexes, or noise), harmonic relationship, or frequency separation, thus confirming the key role temporal coherence plays in the analysis and organization of auditory scenes.
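To make the quantity at the heart of the temporal coherence hypothesis concrete, the sketch below measures coherence between two acoustic features as the correlation of their slow amplitude envelopes: features belonging to one source should be mutually correlated, and uncorrelated with the background. The envelope extraction method, filter settings, and test signals are illustrative assumptions, not the analysis used in the study.

```python
# Minimal sketch of a temporal-coherence measure: Pearson correlation
# between the slow amplitude envelopes of two acoustic features.
# Filter settings and test signals are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope(x, fs, cutoff=20.0):
    """Slow amplitude envelope of a band signal (Hilbert + low-pass)."""
    env = np.abs(hilbert(x))
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

def temporal_coherence(x1, x2, fs):
    """Pearson correlation between the slow envelopes of two features."""
    e1, e2 = envelope(x1, fs), envelope(x2, fs)
    e1, e2 = e1 - e1.mean(), e2 - e2.mean()
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

# Two tones sharing a 4 Hz amplitude modulation are coherent; a third
# tone modulated at an incommensurate rate is not.
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
am_a = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))        # shared 4 Hz envelope
am_b = 0.5 * (1 + np.sin(2 * np.pi * 7 * t + 1.0))  # different envelope
tone1 = am_a * np.sin(2 * np.pi * 440 * t)
tone2 = am_a * np.sin(2 * np.pi * 880 * t)   # coherent with tone1
tone3 = am_b * np.sin(2 * np.pi * 660 * t)   # incoherent with tone1
print(temporal_coherence(tone1, tone2, fs))  # near 1
print(temporal_coherence(tone1, tone3, fs))  # near 0
```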
Seeking exposure to unfamiliar experiences is an essential aspect of the human condition, and the brain must adapt to a constantly changing environment by learning the evolving statistical patterns emerging from it. Cultures are shaped by norms and conventions, and exposure to an unfamiliar culture therefore induces a type of learning often described as implicit: when exposed to a set of stimuli constrained by unspoken rules, cognitive systems must rapidly build a mental representation of the underlying grammar. Music offers a unique opportunity to investigate this implicit statistical learning, as sequences of tones forming melodies exhibit structural properties that listeners learn during short- and long-term exposure. Understanding which structural properties of music enhance learning under naturalistic conditions reveals hard-wired properties of cognitive systems while helping to explain why these features are prevalent across cultures. Here we provide behavioral and neural evidence that the prevalence of non-uniform musical scales may be explained by their facilitating effect on melodic learning. In this study, melodies were generated using an artificial grammar with either a uniform (rare) or non-uniform (prevalent) scale. After a short exposure phase, listeners had to detect ungrammatical new melodies while their EEG responses were recorded. Listeners' performance suggested that the extent of statistical learning during music listening depended on the musical scale context: non-uniform scales yielded better syntactic learning. This behavioral effect was mirrored by enhanced neural encoding of musical syntax in the context of non-uniform scales, which further suggests that their prevalence stems from fundamental properties of learning.
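To illustrate what an artificial grammar over a scale might look like, the sketch below samples melodies from a first-order (Markov) transition matrix defined over either a uniform scale (equal step sizes, e.g., whole-tone) or a non-uniform one (e.g., the diatonic major scale). The transition matrix and the example scales are illustrative assumptions; they are not the grammar or scales used in the study.

```python
# Minimal sketch of generating melodies from a first-order (Markov)
# artificial grammar over a scale. The transition matrix and the two
# example scales are illustrative assumptions. Pitches are MIDI notes.
import numpy as np

UNIFORM_SCALE = [60, 62, 64, 66, 68, 70]         # whole-tone: equal 2-semitone steps
NONUNIFORM_SCALE = [60, 62, 64, 65, 67, 69, 71]  # C major: unequal step sizes

def random_grammar(n_states, rng):
    """Row-stochastic transition matrix defining the 'grammar'."""
    T = rng.random((n_states, n_states))
    return T / T.sum(axis=1, keepdims=True)

def generate_melody(scale, grammar, length, rng):
    """Sample a melody (list of MIDI pitches) from the grammar."""
    state = rng.integers(len(scale))
    melody = [scale[state]]
    for _ in range(length - 1):
        state = rng.choice(len(scale), p=grammar[state])
        melody.append(scale[state])
    return melody

rng = np.random.default_rng(1)
g_uniform = random_grammar(len(UNIFORM_SCALE), rng)
g_diatonic = random_grammar(len(NONUNIFORM_SCALE), rng)
print(generate_melody(UNIFORM_SCALE, g_uniform, 12, rng))
print(generate_melody(NONUNIFORM_SCALE, g_diatonic, 12, rng))
```

Ungrammatical test melodies, in this setup, would be sequences containing transitions assigned low probability by the matrix the listener was exposed to.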