To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyzes sound using both generic acoustic representations (e.g., spectrotemporal modulation) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows – the time window within which stimuli alter the neural response – and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning ~50 to 400 milliseconds. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (<200 milliseconds) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (>200 milliseconds) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.
Background: DPP6, a transmembrane protein with a large extracellular domain, is an auxiliary subunit of Kv4.2 potassium channels. Results: The extracellular domain is required for DPP6 export from the ER, while the intracellular domains impart the functional impact on Kv4.2. Conclusion: Different DPP6 domains are responsible for its localization and function. Significance: Understanding DPP6 function may provide insight into its role in neuronal development and disease.
The human auditory cortex simultaneously processes speech and determines the location of a speaker in space. Neuroimaging studies in humans have implicated core auditory areas in processing the spectrotemporal and spatial content of sound; however, how these features are represented together is unclear. We recorded directly from human subjects implanted bilaterally with depth electrodes in core auditory areas as they listened to speech from different directions. We found both local and joint selectivity to spatial and spectrotemporal speech features, with the two feature types organized independently of each other. This representation enables successful decoding of both spatial and phonetic information. Furthermore, we found that the location of the speaker does not change the spectrotemporal tuning of the electrodes but, rather, modulates their mean response level. Our findings contribute to defining the functional organization of responses in the human auditory cortex, with implications for more accurate neurophysiological models of speech processing.