We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons' initial conditions are drawn independently from the same law, depending only on the population they belong to, we prove that a propagation-of-chaos phenomenon takes place, namely that in the mean-field limit any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations: either nonlinear stochastic differential equations resembling the McKean-Vlasov equations, or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments indicating that the mean-field equations are a good representation of the mean activity of a finite-size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g., a bifurcation analysis.
Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80.
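To fix ideas, here is a minimal sketch of the kind of implicit mean-field equation described above, written for a single population of FitzHugh-Nagumo neurons with electrical (gap-junction) coupling; the symbols I, J, a, b, c, sigma are illustrative placeholders, not the paper's notation:

```latex
% One-population FitzHugh-Nagumo mean-field limit, electrical coupling;
% all parameter symbols are illustrative.
\begin{aligned}
dV_t &= \Bigl(V_t - \tfrac{V_t^3}{3} - w_t + I
        + J\bigl(\mathbb{E}[\bar V_t] - V_t\bigr)\Bigr)\,dt + \sigma\,dW_t,\\
dw_t &= a\,(V_t + b - c\,w_t)\,dt,
\end{aligned}
```

where $(\bar V_t)$ is an independent copy of $(V_t)$. The law of the solution thus enters its own drift through $\mathbb{E}[\bar V_t]$, which is what makes the equation implicit in the sense used above.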
Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons, where major differences between the mean-field approximation and the real behavior of the network can arise. Yet many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons, and mathematical methods that correctly describe networks of this size are still rare, which prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we determine the global bifurcations numerically. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network that reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing.
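As a toy illustration of the symmetry-breaking scenario described above (not the paper's model: the two-neuron network, the sigmoid phi, and the parameter values below are all assumed for the sketch), one can brute-force the fixed points of a homogeneous inhibitory pair and watch asymmetric solutions appear when inhibition is strong:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical 2-neuron homogeneous inhibitory network:
#   v_i' = -v_i + J * phi(v_j) + I,  with sigmoidal phi.
# For strong inhibition (J << 0) the symmetric state v_1 = v_2 can
# coexist with asymmetric fixed points (spontaneous symmetry-breaking).

def phi(v):
    return 1.0 / (1.0 + np.exp(-v))           # sigmoidal rate function

def rhs(v, J, I):
    # fully connected, no self-coupling: each neuron drives the other
    return -v + J * phi(v[::-1]) + I

def fixed_points(J, I, n_guesses=200, seed=0):
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_guesses):
        v0 = rng.uniform(-10, 10, size=2)
        v, info, ok, _ = fsolve(rhs, v0, args=(J, I), full_output=True)
        if ok == 1 and not any(np.allclose(v, f, atol=1e-6) for f in found):
            found.append(v)
    return found

for J in [-2.0, -20.0]:                        # weak vs strong inhibition
    fps = fixed_points(J, I=5.0)
    asym = [f for f in fps if abs(f[0] - f[1]) >= 1e-6]
    print(f"J={J}: {len(fps)} fixed points, {len(asym)} asymmetric")
```

For weak inhibition the search returns only the symmetric fixed point; for strong inhibition it additionally returns an asymmetric pair in which the two identical neurons settle at different rates.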
Bifurcation theory is a powerful tool for studying how the dynamics of a neural network model depends on its underlying neurophysiological parameters. However, bifurcation theory has been developed mostly for smooth dynamical systems and for continuous-time non-smooth models, which prevents us from understanding the changes of dynamics in some widely used classes of artificial neural network models. This article is an attempt to fill this gap, through the introduction of algorithms that perform a semi-analytical bifurcation analysis of a spin-glass-like neural network model with binary firing rates and discrete-time evolution. Our approach is based on a numerical brute-force search of the stationary and oscillatory solutions of the spin-glass model, from which we derive analytical expressions of its bifurcation structure by means of the state-to-state transition probability matrix. The algorithms determine how the network parameters affect the degree of multistability, the emergence and the period of the neural oscillations, and the formation of symmetry-breaking in the neural populations. While this technique can be applied to networks with arbitrary (generally asymmetric) connectivity matrices, we introduce in particular a highly efficient algorithm for the bifurcation analysis of sparse networks. We also provide some examples of the obtained bifurcation diagrams and a Python implementation of the algorithms.
Introduction
Neural complexity refers to the wide variety of dynamical behaviors that occur in neural networks [5,9,27]. This set of dynamical behaviors includes variations in the number of stable solutions of neuronal activity, the formation of neural oscillations, spontaneous symmetry-breaking, chaos and much more [1,16]. Qualitative changes of neuronal activity, also known as bifurcations, are elicited by variations of the network parameters, such as the strength of the external input to the network, the strength of the synaptic connections between neurons, or other network characteristics. Bifurcation theory is a standard mathematical formalism for studying neural complexity [20]. It allows the construction of a map of neuronal activity, known as a bifurcation diagram, that links points or sets in the parameter space to their corresponding network dynamics. In the study of firing-rate network models, bifurcation theory has been applied mostly to graded (smooth) neural networks with analog firing rates (e.g. [3-5, 9, 15, 27]), proving itself an effective tool for deepening our understanding of network dynamics. The bifurcation analysis of smooth models is based on differential analysis, in particular on the Jacobian matrix of the system. However, the Jacobian matrix is not defined everywhere for artificial neuronal models with a discontinuous activation function, such as networks of binary neurons. For this reason, bifurcation theory of smooth dynamical systems cannot be applied directly to these models. On the other hand, while the bifurcation analysis of non-smooth ...
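The brute-force part of such an analysis is easy to sketch. The snippet below enumerates all 2^N states of a small deterministic binary network s(t+1) = H(J s(t) + I - theta) and extracts its fixed points and oscillations. It is a noise-free toy under assumed parameters (N, J, I, theta below are illustrative), whereas the article works with the full state-to-state transition probability matrix:

```python
import itertools
import numpy as np

N = 4
rng = np.random.default_rng(1)
J = rng.normal(0.0, 1.0, size=(N, N))     # generally asymmetric couplings
np.fill_diagonal(J, 0.0)
I, theta = 0.2, 0.0                       # external input, firing threshold

def step(s):
    """Deterministic update s(t+1) = H(J s(t) + I - theta)."""
    return tuple((J @ np.array(s) + I - theta > 0).astype(int))

# transition map over all 2**N binary states
succ = {s: step(s) for s in itertools.product((0, 1), repeat=N)}

# iterate each state until a revisit; the periodic tail is an attractor
attractors = set()
for s0 in succ:
    path, s = [], s0
    while s not in path:
        path.append(s)
        s = succ[s]
    attractors.add(frozenset(path[path.index(s):]))

for a in sorted(attractors, key=len):
    kind = "fixed point" if len(a) == 1 else f"oscillation, period {len(a)}"
    print(kind, sorted(a))
```

Every trajectory of a finite deterministic map must eventually revisit a state; the periodic segment it enters is either a stationary solution (period 1) or a neural oscillation (period greater than 1).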
We introduce a new formalism for evaluating analytically the cross-correlation structure of a finite-size firing-rate network with recurrent connections. The analysis performs a first-order perturbative expansion of the neural activity equations, which include three different sources of randomness: the background noise of the membrane potentials, their initial conditions, and the distribution of the recurrent synaptic weights. This allows the analytical quantification of the relationship between anatomical and functional connectivity, i.e. of how the synaptic connections determine the statistical dependencies, at any order, among different neurons. The technique we develop is general, but for simplicity and clarity we demonstrate its efficacy by applying it to the case of synaptic connections described by regular graphs. The analytical equations so obtained reveal previously unknown behaviors of recurrent firing-rate networks, especially regarding how correlations are modified by the external input, by the finite size of the network, by the density of the anatomical connections, and by correlations in the sources of randomness. In particular, we show that a strong input can make the neurons almost independent, suggesting that functional connectivity does not depend only on the static anatomical connectivity but also on the external inputs. Moreover, we prove that in general it is not possible to find a mean-field description à la Sznitman of the network if the anatomical connections are too sparse or if our three sources of variability are correlated. To conclude, we show a very counterintuitive phenomenon, which we call stochastic synchronization, through which neurons become almost perfectly correlated even if the sources of randomness are independent. Due to its ability to quantify how the activity of individual neurons and the correlations among them depend on external inputs, the formalism introduced here can serve as a basis for exploring analytically the computational capability of population codes expressed by recurrent neural networks.
Electronic Supplementary Material: The online version of this article (doi:10.1186/s13408-015-0020-y) contains supplementary material 1.
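A first-order (linear-response) calculation of this kind can be sketched in a few lines: linearize the rate dynamics around the fixed point and solve a Lyapunov equation for the stationary covariance. The model, weights, and parameters below are assumptions made for the sketch, not the formalism of the paper; the example also qualitatively reproduces the decorrelation-by-strong-input effect mentioned above:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import fsolve

# Toy linear-response estimate of correlations in a small rate network
#   dv = (-v + J phi(v) + I) dt + sigma dW .
# Linearizing around the fixed point v*, dx = A x dt + sigma dW with
# A = -Id + J diag(phi'(v*)); the stationary covariance S solves the
# Lyapunov equation  A S + S A^T + sigma^2 Id = 0.
# All parameters are illustrative, not taken from the article.

def phi(v):  return np.tanh(v)
def dphi(v): return 1.0 - np.tanh(v) ** 2

N, sigma = 6, 0.5
rng = np.random.default_rng(2)
J = rng.normal(0.0, 0.3, size=(N, N))        # recurrent weights
np.fill_diagonal(J, 0.0)

def correlations(I):
    vstar = fsolve(lambda v: -v + J @ phi(v) + I, np.zeros(N))
    A = -np.eye(N) + J * dphi(vstar)[None, :]  # Jacobian at fixed point
    S = solve_continuous_lyapunov(A, -sigma**2 * np.eye(N))
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)                  # correlation matrix

for I in [0.0, 5.0]:
    C = correlations(I * np.ones(N))
    off = C[~np.eye(N, dtype=bool)]
    print(f"I={I}: mean |corr| = {np.abs(off).mean():.3f}")
```

With a strong input the gain phi'(v*) saturates toward zero, the effective recurrent coupling vanishes, and the off-diagonal correlations collapse; this is the intuition behind the near-independence result quoted above.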