The ability of neural networks to associate successive states of network activity underlies many cognitive functions. Hence, we hypothesized that many ubiquitous structural and dynamical properties of local cortical networks result from associative learning. To test this hypothesis, we trained recurrent networks of excitatory and inhibitory neurons on memories composed of varying numbers of associations and compared the resulting network properties with those observed experimentally. We show that, when the network is robustly loaded with a near-maximum number of associations it can support, it develops properties that are consistent with the observed probabilities of excitatory and inhibitory connections, shapes of connection weight distributions, overexpression of specific 2- and 3-neuron motifs, distributions of connection numbers in clusters of 3-8 neurons, sustained, irregular, and asynchronous firing activity, and balance of excitation and inhibition. In addition, memories loaded into the network can be retrieved even in the presence of noise comparable to the baseline variations in the postsynaptic potential. The confluence of these results suggests that many structural and dynamical properties of local cortical networks are simply a byproduct of associative learning. We predict that the overexpression of excitatory-excitatory bidirectional connections observed in many cortical systems must be accompanied by an underexpression of bidirectionally connected inhibitory-excitatory neuron pairs.
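The motif comparison at the center of these results can be made concrete with a minimal sketch (our own illustration, not the authors' code): given a binary connectivity matrix, count reciprocally connected pairs and compare the count with the chance level for a random network with the same connection probability. All names and the Erdős–Rényi baseline are assumptions made for illustration.

```python
# Sketch: quantify over/underexpression of the 2-neuron bidirectional motif.
import numpy as np

def bidirectional_overexpression(C: np.ndarray) -> float:
    """C[i, j] = 1 if neuron j connects to neuron i; zero diagonal assumed."""
    n = C.shape[0]
    p = C.sum() / (n * (n - 1))        # overall connection probability
    pairs = n * (n - 1) / 2            # number of unordered neuron pairs
    observed = np.sum(C * C.T) / 2     # reciprocally connected pairs
    expected = pairs * p ** 2          # chance level for independent edges
    return observed / expected         # >1: overexpressed, <1: underexpressed

rng = np.random.default_rng(0)
C = (rng.random((100, 100)) < 0.15).astype(int)
np.fill_diagonal(C, 0)
print(bidirectional_overexpression(C))  # ~1 for a random network
```

Applied separately to excitatory-excitatory and inhibitory-excitatory pairs, a ratio above or below 1 corresponds to the over- and underexpression contrasted in the prediction above.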
Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. Dendritic integration of excitatory and inhibitory inputs can be highly nonlinear and can depend strongly on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this, most neuron models used in artificial neural networks today describe only the potential of a single somatic compartment and assume simple linear summation of all synaptic inputs. Here we suggest a new, biophysically motivated derivation of a single-compartment model that integrates the nonlinear effects of shunting inhibition, in which an inhibitory input on the route of an excitatory input to the soma cancels, or "shunts", the excitatory potential. In particular, our integration of nonlinear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for a rigorous mathematical treatment of network effects. Using this formulation, we further devised a spiking network model in which inhibitory neurons act as global shunting gates, and we show that the network exhibits persistent activity in a low-firing regime.
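As a rough illustration of the multiplicative rule described above, the sketch below contrasts linear summation with a divisive shunting gate, where inhibition scales down the excitatory drive rather than subtracting from it. The gating form and the parameter k are our own assumptions, not the authors' derivation.

```python
# Sketch: multiplicative (shunting) integration vs. linear summation.
def somatic_drive(exc_input: float, inh_input: float, k: float = 1.0) -> float:
    """Effective somatic drive under multiplicative shunting inhibition.

    Linear summation would give exc_input - inh_input; here inhibition
    divisively shunts excitation, so strong inhibition on the path to the
    soma nearly cancels the excitatory potential.
    """
    gate = 1.0 / (1.0 + k * inh_input)  # shunting factor in (0, 1]
    return exc_input * gate

print(somatic_drive(2.0, 0.0))   # 2.0: no inhibition, drive passes unchanged
print(somatic_drive(2.0, 10.0))  # ~0.18: strong input is almost fully shunted
```

One consequence of the multiplicative form is that inhibition can never reverse the sign of the drive, only gate it, which is what allows inhibitory neurons to act as global gates in the network model.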
Neural networks in the brain can function reliably despite various sources of errors and noise present at every step of signal transmission. These sources include errors in the presynaptic inputs to the neurons, noise in synaptic transmission, and fluctuations in the neurons' postsynaptic potentials. Collectively, they lead to errors in the neurons' outputs, which are, in turn, injected back into the network. Does unreliable network activity hinder fundamental functions of the brain, such as learning and memory retrieval? To explore this question, this article examines the effects of errors and noise on the properties of biologically constrained networks of inhibitory and excitatory neurons involved in associative sequence learning. The associative learning problem is solved analytically and numerically, and it is also shown how memory sequences can be loaded into the network with a more biologically plausible perceptron-type learning rule. Interestingly, the results reveal that errors and noise during learning increase the probability of memory recall. There is a tradeoff [...] the network in an online manner, and we show that this biologically more plausible method leads to network properties similar to those obtained with the nonlinear optimization and replica methods. Finally, we examine the properties of networks of heterogeneous neurons and make predictions regarding network connectivity. The details of the replica calculation and the numerical solutions of the associative memory storage model are provided in the SI.

RESULTS

A. Network model of associative memory storage in the presence of errors and noise

We modeled associative sequence learning by a local (~100 μm in size), all-to-all potentially (structurally) connected (29, 30) cortical network. The model network consisted of $N_{\mathrm{inh}}$ inhibitory and $(N - N_{\mathrm{inh}})$ excitatory McCulloch-Pitts neurons (17) (Figure 1A) and was faced with the task of learning a sequence of consecutive network states, $X^1 \to X^2 \to \ldots \to X^{m+1}$, in which $X^\mu$ is a binary vector representing the target activities of all neurons at time step $\mu$, and the ratio $m/N$ is referred to as the memory load. During learning, individual neurons had to independently learn to associate the inputs they received from the network with the corresponding target outputs derived from the associative memory sequence. The neurons learned these input-output associations by adjusting the weights of their input connections, $J_{ij}$ (the weight of the connection from neuron $j$ to neuron $i$). In contrast to previous studies, we accounted for the fact that learning in the brain is accompanied by several sources of errors and noise. Within the model, these sources are divided into three categories (orange lightning signs in Figure 1A): (1) spiking errors, or errors in $X^\mu$; (2) synaptic noise, or noise in $J_{ij}$; and (3) intrinsic noise, which combines all other sources of noise affecting the neurons' postsynaptic potentials. The last category includes background synaptic activity and the stochasticity of ion channels. In the model, this category is equivalent to n...
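To make the setup concrete, here is a minimal sketch (a simplified illustration, not the paper's exact model) of loading a memory sequence $X^1 \to \ldots \to X^{m+1}$ into a network of McCulloch-Pitts neurons with a perceptron-type rule: each neuron $i$ independently adjusts its input weights $J_{ij}$ until its output to input $X^\mu$ matches its target $X^{\mu+1}_i$. The network size, load, learning rate, and the omission of Dale's-law sign constraints and of the three error/noise sources are simplifying assumptions.

```python
# Sketch: perceptron-type loading of a memory sequence into a recurrent net.
import numpy as np

rng = np.random.default_rng(1)
N, m, eta = 200, 20, 0.05                   # network size, memory load m, rate
X = rng.integers(0, 2, size=(m + 1, N))     # target states X^1 ... X^(m+1)
J = np.zeros((N, N))                        # J[i, j]: weight from j to i

for _ in range(500):                        # sweeps over all associations
    errors = 0
    for mu in range(m):                     # association X^mu -> X^(mu+1)
        h = J @ X[mu]                       # postsynaptic potentials
        y = (h > 0).astype(int)             # McCulloch-Pitts step output
        mismatch = X[mu + 1] - y            # +1 / -1 where the output is wrong
        J += eta * np.outer(mismatch, X[mu])  # perceptron-type weight update
        np.fill_diagonal(J, 0)              # no self-connections
        errors += np.abs(mismatch).sum()
    if errors == 0:                         # every association reproduced
        break
print("unlearned bits:", errors)
```

Because each row of $J$ is trained as an independent perceptron, the sweep converges whenever the load $m/N$ is below each neuron's critical capacity, which is the regime analyzed in the text above.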