Selective routing of information between cortical areas is required to combine different sources of information according to cognitive demand. Recent experiments have suggested that alpha-band activity originating from the pulvinar coordinates this inter-areal cortical communication. Using a computer model, we investigated whether top-down induced shifts in the relative alpha phase between two cortical areas could modulate cortical communication, quantified as changes in gamma-band coherence between them. The network model comprised two unidirectionally connected populations of spiking neurons, each representing a cortical area. We found that the phase difference between the alpha oscillations modulating the two populations strongly affected their interregional gamma-band coherence. Higher gamma-band coherence also resulted in more efficient transmission of spiking information between the areas, confirming the value of gamma coherence as a proxy for cortical information transmission. In a model where the two populations were connected bidirectionally, the relative alpha phase determined the directionality of communication between them. Our results demonstrate the feasibility of a physiologically realistic mechanism for routing information in the brain based on coupled oscillations, and they yield a set of testable predictions regarding phase shifts in alpha oscillations under different task demands, which call for experimental quantification of neuronal oscillations across regions in, e.g., attention paradigms.
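The abstract does not give the model equations, so the following is only a minimal, signal-level sketch of the quantification it describes, not the spiking-network model itself: two areas whose gamma-band activity is gated by alpha oscillations, a conduction delay, and inter-areal gamma coherence measured as a function of the alpha phase offset. All parameter values (frequencies, delay, noise level) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)     # 10 s of simulated activity
f_alpha, f_gamma = 10.0, 50.0    # assumed alpha and gamma frequencies (Hz)
delay = int(0.012 * fs)          # assumed 12 ms conduction delay (samples)

rng = np.random.default_rng(0)

def gamma_coherence(dphi):
    """Gamma-band coherence between two areas for alpha phase offset dphi."""
    # Area 1: noisy gamma carrier, amplitude-gated by its own alpha phase
    # ("pulsed inhibition" opens and closes a transmission window).
    gate1 = 0.5 * (1 + np.cos(2 * np.pi * f_alpha * t))
    x1 = gate1 * (np.sin(2 * np.pi * f_gamma * t)
                  + 0.5 * rng.standard_normal(t.size))
    # Area 2: receives delayed input from area 1, gated by its own
    # (phase-shifted) alpha rhythm, plus local noise.
    gate2 = 0.5 * (1 + np.cos(2 * np.pi * f_alpha * t - dphi))
    x2 = gate2 * np.roll(x1, delay) + 0.5 * rng.standard_normal(t.size)
    f, Cxy = coherence(x1, x2, fs=fs, nperseg=1024)
    band = (f > 40) & (f < 60)
    return Cxy[band].mean()

for dphi in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    print(f"alpha phase offset {dphi:4.2f} rad -> "
          f"gamma coherence {gamma_coherence(dphi):.3f}")
```

In this toy version, coherence peaks when area 2's excitability window is open at the moment area 1's delayed gamma bursts arrive (here, an offset of roughly 2*pi*f_alpha*0.012 ≈ 0.75 rad) and drops when the windows are misaligned, which is the qualitative effect the abstract reports for the spiking model.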
Recent experiments have revealed a hierarchy of time scales in the visual cortex, where different stages of the visual system process information at different time scales. Recurrent neural networks are ideal models for gaining insight into how information is processed by such a hierarchy of time scales and have become widely used to model temporal dynamics in both machine learning and computational neuroscience. However, when such models are derived as discrete-time approximations of the firing rate of a population of neurons, the time constants of the neuronal process are generally ignored. Learning these time constants could inform us about the time scales underlying temporal processes in the brain and enhance the expressive capacity of the network. To investigate the potential of adaptive time constants, we compare the standard approximation to a more lenient one that accounts for the time scales at which processes unfold. We show that such a model performs better at predicting simulated neural data and allows recovery of the time scales at which the underlying processes unfold. A hierarchy of time scales emerges when the model adapts to data with multiple underlying time scales, underscoring the importance of such a hierarchy for processing complex temporal information.
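The discretization at issue can be made explicit. A standard derivation (sketched here under assumed notation; W_in, W_rec, f, and b are illustrative symbols, not taken from the paper) starts from continuous-time firing-rate dynamics with time constant tau and applies a forward-Euler step of size Delta t:

```latex
\tau \frac{dh}{dt} = -h + f\!\left(W_{\mathrm{in}} x + W_{\mathrm{rec}} h + b\right)
\;\;\Longrightarrow\;\;
h_t = \left(1 - \tfrac{\Delta t}{\tau}\right) h_{t-1}
      + \tfrac{\Delta t}{\tau}\, f\!\left(W_{\mathrm{in}} x_t + W_{\mathrm{rec}} h_{t-1} + b\right).
```

Setting Delta t = tau recovers the standard Elman update h_t = f(W_in x_t + W_rec h_{t-1} + b), which is the sense in which the time constant is "ignored"; keeping the leak factor Delta t / tau as a learnable quantity is the more lenient approximation the abstract refers to.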
Recent advances in machine learning have enabled neural networks to solve tasks humans typically perform. These networks offer an exciting new tool for neuroscience that can give us insight into the emergence of neural and behavioral mechanisms. A large gap remains, however, between the very deep neural networks that have risen to prominence and outperformed many shallower networks in computer vision, and the highly recurrently connected human brain. This trend towards ever-deeper architectures raises the question of why the brain has not developed such an architecture. Besides wiring constraints, we argue that the brain operates under different circumstances when performing object recognition, being confronted with noisy and ambiguous sensory input. We investigate the role of time in the process of object recognition, showing that a recurrent network trained through reinforcement learning can learn the amount of time needed to arrive at an accurate estimate of the stimulus, and that it develops behavioral and neural mechanisms similar to those found in the human and non-human primate literature.
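One way to set up such an agent, sketched below under assumed design choices (the abstract does not specify the architecture, and all names and hyperparameters here are illustrative): a recurrent network receives a noisy glimpse of the stimulus at every time step and chooses either a class response or a "wait" action, so that learning when to respond becomes part of the policy learned via REINFORCE.

```python
import torch
import torch.nn as nn

class RecurrentAgent(nn.Module):
    """Recurrent policy over class responses plus one extra 'wait' action."""
    def __init__(self, n_in, n_hidden, n_classes):
        super().__init__()
        self.rnn = nn.GRUCell(n_in, n_hidden)
        self.policy = nn.Linear(n_hidden, n_classes + 1)  # last action = wait

    def forward(self, x, h):
        h = self.rnn(x, h)
        return torch.distributions.Categorical(logits=self.policy(h)), h

def run_episode(agent, stimulus, noise_std=1.0, max_steps=20):
    """Feed noisy glimpses until the agent commits to a class (or times out)."""
    h = torch.zeros(1, agent.rnn.hidden_size)
    log_probs = []
    for step in range(max_steps):
        glimpse = stimulus + noise_std * torch.randn_like(stimulus)
        dist, h = agent(glimpse.unsqueeze(0), h)
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        if a.item() < dist.logits.shape[-1] - 1:      # committed to a class
            return torch.stack(log_probs), a.item(), step
    return torch.stack(log_probs), None, max_steps    # never responded

# A REINFORCE update would reward correct responses and apply a small
# per-step cost for waiting, so the agent trades off speed against
# accuracy: with noisier stimuli, waiting longer pays off.
```

Under this kind of objective, response times that grow with stimulus ambiguity emerge from the learned policy rather than being hand-coded, which is the behavioral signature the abstract compares with the primate literature.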
Recurrent neural network models have become widely used in computational neuroscience to model the dynamics of neural populations, as well as in machine learning applications to model data with temporal dependencies. The different variants of RNNs commonly used in these fields can be derived as discrete-time approximations of the instantaneous firing rate of a population of neurons. The time constants of the neuronal process are generally ignored in these approximations, even though learning these time constants could inform us about the time scales underlying temporal processes and enhance the expressive capacity of the network. To investigate the potential of adaptive time constants, we compare the standard Elman approximation to a more lenient one that still accounts for the time scales at which processes unfold. We show that such a model with adaptive time scales performs better at predicting temporal data, increases the memory capacity of recurrent neural networks, and allows recovery of the time scales at which the underlying processes unfold.
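A minimal sketch of a cell with learnable per-unit time constants, following the discretization shown earlier (this is an assumed implementation for illustration, not the authors' code; parameter names are hypothetical):

```python
import torch
import torch.nn as nn

class LeakyRNNCell(nn.Module):
    """Elman-style cell with a learnable per-unit time constant tau.

    h_t = (1 - dt/tau) * h_{t-1}
          + (dt/tau) * tanh(W_in x_t + W_rec h_{t-1} + b)

    With dt == tau this reduces to the standard Elman update.
    """
    def __init__(self, n_in, n_hidden, dt=1.0):
        super().__init__()
        self.dt = dt
        self.inp = nn.Linear(n_in, n_hidden)
        self.rec = nn.Linear(n_hidden, n_hidden, bias=False)
        # Learn log(tau) so tau stays positive during optimization.
        self.log_tau = nn.Parameter(torch.zeros(n_hidden))

    def forward(self, x, h):
        alpha = self.dt / torch.exp(self.log_tau)   # leak factor per step
        alpha = torch.clamp(alpha, max=1.0)         # keep the update stable
        return (1 - alpha) * h + alpha * torch.tanh(self.inp(x) + self.rec(h))
```

Because each hidden unit has its own tau, gradient descent can push some units toward fast (small tau) and others toward slow (large tau) dynamics, which is how a hierarchy of time scales can be recovered from data with multiple underlying time scales.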