Considerable research has focused on how basic visual features are maintained in working memory, but little is currently known about the precision or capacity of visual working memory for complex objects. How precisely can an object be remembered, and to what extent might familiarity or perceptual expertise contribute to working memory performance? To address these questions, we developed a set of computer-generated face stimuli that varied continuously along the dimensions of age and gender, and we probed participants’ memories using a method-of-adjustment reporting procedure. This paradigm allowed us to separately estimate the precision and capacity of working memory for individual faces, based on the assumptions of a discrete capacity model, and to assess the impact of face inversion on memory performance. We found that observers could maintain up to 4–5 items on average, with equally good memory capacity for upright and upside-down faces. In contrast, memory precision was significantly impaired by face inversion at every set size tested. Our results demonstrate that the precision of visual working memory for a complex stimulus is not strictly fixed, but instead can be modified by learning and experience. We find that perceptual expertise for upright faces leads to significant improvements in visual precision, without modifying the capacity of working memory.
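The abstract does not detail the estimation procedure, but discrete-capacity models of this kind are commonly fit as a mixture of an on-target Gaussian (whose width gives memory precision) and a uniform guessing distribution (whose mixture weight, multiplied by set size, gives capacity). The sketch below is a minimal, hypothetical illustration of such a fit in Python; the error scale, starting values, and optimizer are assumptions, not the authors' implementation.

```python
# Minimal sketch of a discrete-capacity (mixture-model) fit to
# method-of-adjustment errors. Assumptions: errors live on a 100-unit
# response scale; remembered items yield Gaussian errors around the target,
# forgotten items yield uniform guesses.
import numpy as np
from scipy.optimize import minimize

def fit_mixture(errors, range_width=100.0):
    """Return (p_mem, sigma): probability an item was in memory and the
    standard deviation (inverse precision) of on-target reports."""
    def neg_log_lik(params):
        p_mem, sigma = params
        gauss = np.exp(-0.5 * (errors / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        unif = 1.0 / range_width
        return -np.sum(np.log(p_mem * gauss + (1 - p_mem) * unif + 1e-12))
    res = minimize(neg_log_lik, x0=[0.8, 10.0],
                   bounds=[(1e-3, 1.0), (1e-2, range_width)])
    return res.x  # capacity at set size N would be estimated as K = p_mem * N

# Simulated example: 80% of items remembered, with SD = 8 units
rng = np.random.default_rng(0)
errs = np.where(rng.random(500) < 0.8,
                rng.normal(0, 8, 500),
                rng.uniform(-50, 50, 500))
print(fit_mixture(errs))
```

Under a model of this form, the inversion effect reported above would appear as a larger sigma for inverted faces at a comparable p_mem.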
Neural systems can be modeled as complex networks in which neural elements are represented as nodes linked to one another through structural or functional connections. The resulting network can be analyzed using mathematical tools from network science and graph theory to quantify the system's topological organization and to better understand its function. While the network-based approach has become common in the analysis of large-scale neural systems probed by non-invasive neuroimaging, few studies have used network science to study the organization of biological neuronal networks reconstructed at the cellular level, and thus many basic questions remain unanswered. Here, we used two-photon calcium imaging to record spontaneous activity from the same set of cells in mouse auditory cortex over the course of several weeks. We reconstruct functional networks in which cells are linked to one another by edges weighted according to the maximum lagged correlation of their fluorescence traces. We show that the networks exhibit modular structure across multiple topological scales and that these multi-scale modules unfold as part of a hierarchy. We also show that, on average, network architecture becomes increasingly dissimilar over time, with similarity decaying monotonically with the distance (in time) between sessions. Finally, we show that a small fraction of cells maintain strongly correlated activity over multiple days, forming a stable temporal core surrounded by a fluctuating periphery. Our work provides a methodological blueprint for future studies of spontaneous activity measured by two-photon calcium imaging, analyzed with computational tools from network science. The methods are flexible and easily extended to additional datasets, opening the possibility of studying cellular-level network organization of neural systems and how that organization is modulated by stimuli or altered in models of disease.
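As a rough illustration of the network-construction step described above, the sketch below weights each cell pair by the maximum Pearson correlation of their fluorescence traces over a small range of lags; the lag window and the brute-force pairwise loop are assumptions rather than the authors' exact pipeline. The resulting weighted adjacency matrix could then be handed to standard community-detection methods to probe the multi-scale modular structure described above.

```python
# Hedged sketch: functional network from dF/F traces, with edges weighted by
# the maximum lagged Pearson correlation. The lag range is an assumed parameter.
import numpy as np

def max_lagged_corr(x, y, max_lag=10):
    """Maximum Pearson correlation of x and y over lags in [-max_lag, max_lag]."""
    best = -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        best = max(best, np.corrcoef(a, b)[0, 1])
    return best

def functional_network(traces, max_lag=10):
    """traces: (n_cells, n_frames) array -> symmetric weighted adjacency matrix."""
    n = traces.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            W[i, j] = W[j, i] = max_lagged_corr(traces[i], traces[j], max_lag)
    return W

# Example: 5 simulated cells, 1000 frames
rng = np.random.default_rng(0)
print(functional_network(rng.standard_normal((5, 1000))).round(2))
```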
Sensory systems must account for both contextual factors and prior experience to adaptively engage with the dynamic external environment. In the central auditory system, neurons modulate their responses to sounds based on statistical context. These response modulations can be understood through a hierarchical predictive coding lens: responses to repeated stimuli are progressively decreased, in a process known as repetition suppression, whereas unexpected stimuli produce a prediction error signal. Prediction error increases incrementally along the auditory hierarchy from the inferior colliculus (IC) to the auditory cortex (AC), suggesting that these regions may engage in hierarchical predictive coding. A potential substrate for top-down predictive cues is the massive set of descending projections from the auditory cortex to subcortical structures, although the role of this system in predictive processing has never been directly assessed. In awake mice, we tested how optogenetic inactivation of auditory cortico-collicular feedback affects the responses of IC neurons to stimuli designed to probe prediction error and repetition suppression. Inactivation of the cortico-collicular pathway led to a decrease in prediction error in IC. Repetition suppression was unaffected by cortico-collicular inactivation, suggesting that this metric may reflect fatigue of bottom-up sensory inputs rather than predictive processing. We also discovered populations of IC units that exhibit repetition enhancement, a sequential increase in firing with stimulus repetition. Cortico-collicular inactivation led to a decrease in repetition enhancement in the central nucleus of the IC, suggesting that it is a top-down phenomenon. Negative prediction error, a stronger response to a tone in a predictable rather than an unpredictable sequence, was suppressed in shell IC units during cortico-collicular inactivation. These changes in predictive coding metrics arose from bidirectional modulations of the responses to the standard and deviant contexts, such that units in the IC responded more similarly to the two contexts in the absence of cortical input. We also investigated how these metrics compare between the anesthetized and awake states by recording from the same units under both conditions. We found that metrics of predictive coding and deviance detection differ depending on the anesthetic state of the animal, with negative prediction error emerging in the central IC and repetition enhancement and prediction error being more prevalent in the absence of anesthesia. Overall, our results demonstrate that the auditory cortex provides cues about the statistical context of sound to subcortical brain regions via direct feedback, regulating the processing of both prediction and repetition.
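The abstract does not spell out how its predictive-coding metrics were computed. One common approach in oddball paradigms compares responses to the same tone presented as a deviant, as a standard, and in a neutral control sequence (e.g., a cascade or many-standards condition), and decomposes the overall mismatch into prediction-error and repetition-suppression components. The sketch below illustrates that style of decomposition with an assumed normalization; it is not the authors' exact analysis.

```python
# Hypothetical index decomposition for oddball responses. dev, std, ctrl are
# mean firing rates to the same tone in deviant, standard, and control
# contexts; the sum-based normalization here is an assumption.
import numpy as np

def predictive_coding_indices(dev, std, ctrl):
    """Return (iMM, iPE, iRS): mismatch, prediction-error, and
    repetition-suppression indices. By construction iMM = iPE + iRS."""
    dev, std, ctrl = (np.asarray(v, dtype=float) for v in (dev, std, ctrl))
    norm = dev + std + ctrl + 1e-12   # avoid division by zero
    d, s, c = dev / norm, std / norm, ctrl / norm
    iMM = d - s   # overall deviance detection (deviant vs. standard)
    iPE = d - c   # prediction error: extra response to the unexpected deviant
    iRS = c - s   # repetition suppression: loss of response to the repeated standard
    return iMM, iPE, iRS

# Example: a unit firing 12, 4, and 8 spikes/s to deviant, standard, and control
print(predictive_coding_indices(12, 4, 8))
```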
In everyday acoustic environments, we navigate through a maze of sounds with complex spectrotemporal structure, spanning many frequencies and exhibiting temporal modulations that differ within frequency bands. Our auditory system needs to encode the same sounds efficiently in a variety of contexts, while preserving the ability to separate complex sounds within an acoustic scene. Recent work in auditory neuroscience has made substantial progress in studying how sounds are represented in the auditory system under different contexts, demonstrating that auditory processing of seemingly simple acoustic features, such as frequency and time, depends strongly on co-occurring acoustic stimuli and on the animal's behavioral state. Through a combination of electrophysiological recordings, computational analyses, and behavioral techniques, recent research has characterized how the external spectral and temporal context of a stimulus and the internal behavioral state interact to shape auditory processing.
In everyday life, we integrate visual and auditory information in routine tasks such as navigation and communication. While it is known that concurrent sound can improve visual perception, the neuronal correlates of this audiovisual integration are not fully understood. Specifically, it remains unknown whether sound-driven improvements in the detection and discriminability of visual stimuli are reflected in the neuronal firing patterns of the primary visual cortex (V1). Furthermore, presentation of a sound can induce movement in the subject, but little is understood about whether and how sound-induced movement contributes to V1 neuronal activity. Here, we investigated how sound and movement interact to modulate V1 visual responses in awake, head-fixed mice and whether this interaction improves neuronal encoding of the visual stimulus. We presented visual drifting gratings with and without simultaneous auditory white noise to awake mice while recording mouse movement and V1 neuronal activity. Sound modulated the light-evoked activity of 80% of light-responsive neurons, with 95% of these neurons exhibiting increased activity when the auditory stimulus was present. Sound consistently induced movement. However, a generalized linear model revealed that sound and movement had distinct and complementary effects on the neuronal visual responses. Furthermore, decoding of the visual stimulus from the neuronal activity was improved with sound, an effect that persisted even when controlling for movement. These results demonstrate that sound and movement modulate visual responses in complementary ways, resulting in an improved neuronal representation of the visual stimulus. This study clarifies the role of movement as a potential confound in neuronal audiovisual responses, and expands our knowledge of how multimodal processing is mediated at the neuronal level in the awake brain.
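As a hedged illustration of the generalized-linear-model analysis mentioned above, the sketch below fits a Poisson GLM with visual, sound, and movement regressors to per-trial spike counts; the regressor set, interaction term, and trial structure are assumptions rather than the study's exact design.

```python
# Hypothetical Poisson GLM separating visual, sound, and movement contributions
# to a neuron's per-trial spike count. Regressor names are illustrative.
import numpy as np
import statsmodels.api as sm

def fit_sound_movement_glm(spike_counts, visual, sound, movement):
    """visual, sound: binary indicators; movement: continuous speed regressor."""
    X = sm.add_constant(np.column_stack([visual, sound, movement, visual * sound]))
    return sm.GLM(spike_counts, X, family=sm.families.Poisson()).fit()

# Simulated example: a neuron driven by the grating, boosted by sound and running
rng = np.random.default_rng(1)
n = 400
vis, snd = rng.integers(0, 2, n), rng.integers(0, 2, n)
mov = rng.gamma(2.0, 1.0, n)
counts = rng.poisson(np.exp(0.5 + 0.8 * vis + 0.4 * snd + 0.1 * mov))
fit = fit_sound_movement_glm(counts, vis, snd, mov)
print(fit.params)  # [baseline, visual, sound, movement, visual x sound]
```

Comparing this full model to a reduced model without the sound regressor (for example, via cross-validated log-likelihood) is one way to ask whether sound carries explanatory power beyond movement, in line with the decoding result described above.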