Noise correlations (r_noise) between neurons can affect a neural population's discrimination capacity, even without changes in the neurons' mean firing rates. r_noise, the degree to which the response variability of a pair of neurons is correlated, has been shown to change with attention, with most reports showing a reduction in r_noise. However, the effect of reducing r_noise on sensory discrimination depends on many factors, including the tuning similarity, or tuning correlation (r_tuning), between the pair. Theoretically, reducing r_noise should enhance sensory discrimination when the pair exhibits similar tuning, but should impair discrimination when tuning is dissimilar. We recorded from pairs of neurons in primary auditory cortex (A1) under two conditions: while rhesus macaque monkeys (Macaca mulatta) actively performed a threshold amplitude modulation (AM) detection task and while they sat passively awake. We report that, for pairs with similar AM tuning, average r_noise in A1 decreases when the animal performs the AM detection task compared with when it sits passively. For pairs with dissimilar tuning, average r_noise did not change significantly between conditions. This suggests that attention-related modulation can target selective subcircuits to decorrelate noise. These results demonstrate that engagement in an auditory task enhances population coding in primary auditory cortex by selectively reducing deleterious r_noise while leaving beneficial r_noise intact.
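The two correlation measures at the center of this abstract can be made concrete with a short sketch. The code below is a minimal illustration using simulated Poisson spike counts, not the authors' analysis pipeline: r_tuning is computed as the correlation of two neurons' mean responses across stimuli, and r_noise as the correlation of their trial-to-trial fluctuations after removing each stimulus's mean response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spike-count data for two simultaneously recorded neurons:
# rows = stimulus conditions (e.g., AM depths), columns = repeated trials.
n_stim, n_trials = 8, 60
counts_a = rng.poisson(lam=np.linspace(5, 20, n_stim)[:, None],
                       size=(n_stim, n_trials))
counts_b = rng.poisson(lam=np.linspace(6, 18, n_stim)[:, None],
                       size=(n_stim, n_trials))

# r_tuning: correlation of the two neurons' mean responses across stimuli.
r_tuning = np.corrcoef(counts_a.mean(axis=1), counts_b.mean(axis=1))[0, 1]

# r_noise: correlation of trial-to-trial fluctuations around each stimulus's
# mean (z-scored within stimulus, then pooled across stimuli).
def zscore_within_stimulus(counts):
    mu = counts.mean(axis=1, keepdims=True)
    sd = counts.std(axis=1, keepdims=True)
    return ((counts - mu) / sd).ravel()

r_noise = np.corrcoef(zscore_within_stimulus(counts_a),
                      zscore_within_stimulus(counts_b))[0, 1]
print(f"r_tuning = {r_tuning:.2f}, r_noise = {r_noise:.2f}")
```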
Sensory environments often contain an overwhelming amount of information, with relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of neural populations to support this process is poorly understood, because population coding is typically studied through simulations in which a single sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations (r_noise) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on r_noise depended not only on the population's tuning to the attended feature, but also on its tuning to the distractor feature. In an attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in r_noise can simultaneously enhance the representation of the attended feature while suppressing that of the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments. Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations (r_noise) in rhesus macaque A1 during task performance. Whereas previous studies showed that the effect of attention on r_noise depends on the population's tuning to the attended feature, we show that it depends on the tuning to the distractor feature as well. We suggest that these effects reflect an efficient process by which sensory cortex simultaneously enhances relevant information and suppresses irrelevant information.
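The proposed model extension is not detailed in the abstract, but the underlying intuition can be illustrated with a toy two-neuron calculation (all parameters invented here, not the authors' model): if a pair is similarly tuned to the attended feature and oppositely tuned to the distractor, a single reduction in r_noise raises discriminability (d') for a summed readout of the attended feature while lowering it for a differenced readout of the distractor.

```python
import numpy as np

# Toy sketch (made-up numbers, not the authors' model): one shift in r_noise
# can enhance coding of one feature and suppress coding of another, depending
# on whether the pair is similarly or oppositely tuned to that feature.
sigma = 1.0                            # noise SD of each neuron
d_attended = np.array([1.0, 1.0])      # both neurons increase rate with the attended feature
d_distract = np.array([1.0, -1.0])     # opposite-sign tuning to the distractor feature

def pair_dprime(delta_mu, weights, rho):
    """d' of the linear readout w1*r1 + w2*r2 for a mean shift delta_mu."""
    cov = sigma ** 2 * np.array([[1.0, rho], [rho, 1.0]])
    return (weights @ delta_mu) / np.sqrt(weights @ cov @ weights)

for rho in (0.2, 0.0):                 # e.g., passive vs engaged correlation
    da = pair_dprime(d_attended, np.array([1.0, 1.0]), rho)    # summed readout
    db = pair_dprime(d_distract, np.array([1.0, -1.0]), rho)   # differenced readout
    print(f"rho={rho:.1f}: attended d'={da:.2f}, distractor d'={db:.2f}")
```

In this toy example, lowering rho from 0.2 to 0 moves the attended-feature d' from about 1.29 to 1.41 while the distractor d' falls from about 1.58 to 1.41.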
Fluctuations in the amplitude envelope of complex sounds provide critical cues for hearing, particularly for speech and animal vocalizations. Responses to amplitude modulation (AM) in the ascending auditory pathway have chiefly been described for single neurons. How neural populations might collectively encode and represent information about AM remains poorly characterized, even in primary auditory cortex (A1). We modeled population responses to AM based on data recorded from A1 neurons in awake squirrel monkeys and evaluated how accurately single trial responses to modulation frequencies from 4 to 512 Hz could be decoded as functions of population size, composition, and correlation structure. We found that a population-based decoding model that simulated convergent, equally weighted inputs was highly accurate and remarkably robust to the inclusion of neurons that were individually poor decoders. By contrast, average rate codes based on convergence performed poorly; effective decoding using average rates was only possible when the responses of individual neurons were segregated, as in classical population decoding models using labeled lines. The relative effectiveness of dynamic rate coding in auditory cortex was explained by shared modulation phase preferences among cortical neurons, despite heterogeneity in rate-based modulation frequency tuning. Our results indicate significant population-based synchrony in primary auditory cortex and suggest that robust population coding of the sound envelope information present in animal vocalizations and speech can be reliably achieved even with indiscriminate pooling of cortical responses. These findings highlight the importance of firing rate dynamics in population-based sensory coding.
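As a rough illustration of the pooling scheme described above (a simplified simulation with invented parameters, not the authors' model), the sketch below builds a population of Poisson neurons with heterogeneous rate-based modulation-frequency tuning but a shared modulation phase preference, sums their responses with equal weights, and decodes single trials by matching the pooled temporal response to per-frequency templates.

```python
import numpy as np

rng = np.random.default_rng(0)

mod_freqs = [4, 8, 16, 32]      # Hz; a subset of the 4-512 Hz range for brevity
dur, dt = 1.0, 0.005            # 1 s trials, 5 ms bins
t = np.arange(0.0, dur, dt)
n_neurons = 20

# Heterogeneous rate tuning (each neuron has its own "best" modulation frequency)
# but a shared modulation phase preference across the population.
base_rate = rng.uniform(5.0, 20.0, n_neurons)     # spikes/s
best_mf = rng.choice(mod_freqs, n_neurons)

def simulate_trial(mf):
    """One trial: binned Poisson spike counts for every neuron."""
    counts = np.empty((n_neurons, t.size))
    for i in range(n_neurons):
        gain = 1.5 if best_mf[i] == mf else 1.0                         # crude rate tuning
        rate = base_rate[i] * gain * (1.0 + np.sin(2 * np.pi * mf * t))  # shared phase
        counts[i] = rng.poisson(rate * dt)
    return counts

# Convergent, equally weighted pooling: sum counts across neurons, then decode
# modulation frequency by matching the pooled response to per-frequency templates.
templates = {mf: np.mean([simulate_trial(mf).sum(axis=0) for _ in range(50)], axis=0)
             for mf in mod_freqs}

def decode(counts):
    pooled = counts.sum(axis=0)
    return min(mod_freqs, key=lambda mf: np.sum((pooled - templates[mf]) ** 2))

accuracy = np.mean([decode(simulate_trial(mf)) == mf
                    for mf in mod_freqs for _ in range(25)])
print(f"pooled temporal decoder accuracy: {accuracy:.2f} (chance = 0.25)")
```

The decoder here relies entirely on the timing of the pooled response; because the simulated neurons share a phase preference, indiscriminate summation preserves the envelope-locked structure even though rate tuning is heterogeneous.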
Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields, A1 and the middle lateral belt (ML), of rhesus macaques. We presented amplitude-modulated (AM) noise both during passive listening and while the animals performed an AM detection task (the "active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly split between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, it increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, shifts in r_noise in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation is that higher areas must encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise and thus increase r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and underexplored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding.
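A toy calculation can illustrate why the distribution of tuning signs matters (invented parameters, not the paper's analysis): with a uniform pairwise correlation rho, the discriminability of a sign-matched linear readout degrades with rho when all neurons share the same tuning sign (A1-like), but is essentially insensitive to rho when increasing and decreasing neurons are evenly mixed (ML-like), because the sign-matched readout cancels the shared noise.

```python
import numpy as np

# Toy illustration (parameters made up, not the paper's analysis): sensitivity of
# a linear population readout to a uniform noise correlation rho when tuning is
# homogeneous (all rate-depth slopes positive, as in A1) versus mixed-sign
# (half positive, half negative, as in ML).
N, sigma, delta = 20, 1.0, 0.5        # neurons, noise SD, per-neuron rate change

def readout_dprime(tuning_sign, rho):
    """d' of the sign-matched readout sum_i sign_i * r_i under uniform rho."""
    weights = tuning_sign                  # read out each neuron with its tuning sign
    delta_mu = tuning_sign * delta         # mean rate change follows the tuning sign
    cov = sigma ** 2 * ((1 - rho) * np.eye(N) + rho * np.ones((N, N)))
    return (weights @ delta_mu) / np.sqrt(weights @ cov @ weights)

homog = np.ones(N)                                            # A1-like: all increasing
mixed = np.concatenate([np.ones(N // 2), -np.ones(N // 2)])   # ML-like: evenly split

for rho in (0.00, 0.05, 0.10, 0.20):
    print(f"rho={rho:.2f}  homogeneous d'={readout_dprime(homog, rho):5.2f}  "
          f"mixed-sign d'={readout_dprime(mixed, rho):5.2f}")
```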
How attention influences single-neuron responses in the auditory system remains unresolved. We found that when monkeys actively discriminated temporally amplitude-modulated (AM) sounds from unmodulated sounds, primary auditory cortex (A1) and middle lateral belt (ML) neurons discriminated those sounds better than when the monkeys were passively listening. This was true for both rate and temporal codes. Differences in AM responses and the effects of attentional modulation on those responses suggest that: (1) attention improves neurons' ability to temporally follow modulation; (2) non-synchronized responses play an important role in AM discrimination; (3) attention-related increases in ML activity are stronger and longer-lasting for more difficult stimuli, consistent with stimulus-specific attention, whereas the results in A1 are more consistent with a multiplicative nonlinearity; and (4) A1 and ML code AM differently: ML uses both increases and decreases in firing rate to encode modulation, whereas A1 primarily uses activity increases. These findings provide a crucial step toward understanding both how the auditory system encodes temporal modulation and how attention impacts this code. Further, our findings support a model in which rate and temporal coding work in parallel, permitting a multiplexed code for temporal modulation. [Work supported by NIDCD R01 DC-02514.]
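For readers unfamiliar with the two codes being contrasted, the sketch below (hypothetical spike times, not the authors' analysis pipeline) computes two standard single-trial measures: vector strength, which quantifies temporal locking of spikes to the AM envelope, and the mean firing rate.

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Phase locking of spikes to the AM envelope (1 = perfect locking, ~0 = none)."""
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

def mean_rate(spike_times, duration):
    """Trial-averaged firing rate in spikes/s."""
    return len(spike_times) / duration

# Hypothetical 1 s spike train roughly locked to a 10 Hz envelope
# (one jittered spike per modulation cycle).
rng = np.random.default_rng(1)
spikes = np.arange(0.02, 1.0, 0.1) + rng.normal(0.0, 0.01, 10)
print(f"vector strength = {vector_strength(spikes, 10.0):.2f}, "
      f"rate = {mean_rate(spikes, 1.0):.0f} spikes/s")
```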