Highlights
- Low-dimensional shared variability can be generated in spatial network models
- Synaptic spatial and temporal scales determine the dimensions of shared variability
- Depolarizing inhibitory neurons suppresses the population-wide fluctuations
- Modeling the attentional modulation of variability within and between brain areas
A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional. Neuronal variability is often used as a probe to understand how recurrent circuitry supports network dynamics. However, current models cannot internally produce low-dimensional shared variability, and instead assume that it is inherited from outside the circuit. We analyze population recordings from the visual pathway where directed attention differentially modulates shared variability within and between areas, which is difficult to explain with externally imposed variability. We show that if the spatial and temporal scales of inhibitory coupling match physiology, network models capture the low-dimensional shared variability of our population data. Our theory provides a critical link between measured cortical circuit structure and recorded population activity.

One Sentence Summary: Circuit models with spatio-temporal excitatory and inhibitory interactions generate population variability that captures recorded neuronal activity across cognitive states.

Introduction

The trial-to-trial variability of neuronal responses gives a critical window into how the circuit structure connecting neurons drives brain activity. This idea, combined with the widespread use of population recordings, has prompted a deep interest in how variability is distributed over a population (1, 2). There has been a proliferation of data sets where the shared variability over a population is low dimensional (3-7), meaning that neuronal activity waxes and wanes as a group. How cortical networks generate low-dimensional shared variability is currently unknown.

Theories of cortical variability can be broadly separated into two categories: ones where variability is internally generated through recurrent network interactions (Fig. 1Ai) and ones where variability originates external to the network (Fig. 1Aii). Networks of spiking neuron models where strong excitation is balanced by opposing recurrent inhibition produce high single-neuron variability through internal mechanisms (8-10). However, these networks famously enforce an asynchronous solution, and as such fail to explain population-wide shared variability (11-13). This lack of success is contrasted with the ease of producing arbitrary correlation structure from external sources. Indeed, many past cortical models assume a global fluctuation from an external source (2, 7, 14-16), and accurately capture the structure of population data. However, such phenomenological models are circular, with an assumption of variability from ...
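To make "low-dimensional shared variability" concrete, here is a minimal sketch, using synthetic spike counts rather than the recordings analyzed in the paper, of one common way to quantify it: eigendecompose the trial-to-trial spike-count covariance and summarize its spectrum with a participation ratio. All parameter values are illustrative assumptions.

```python
# A minimal sketch (not the authors' analysis pipeline) of quantifying
# low-dimensional shared variability from population spike counts.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 400

# One global latent fluctuation (rank-1 shared variability) plus private noise.
loadings = rng.uniform(0.5, 1.5, n_neurons)           # per-neuron coupling to the latent
latent = rng.normal(0.0, 1.0, n_trials)               # trial-to-trial shared fluctuation
counts = (10.0                                        # mean spike count
          + np.outer(latent, loadings)                # shared component
          + rng.normal(0.0, 1.0, (n_trials, n_neurons)))  # private component

cov = np.cov(counts, rowvar=False)                    # neurons x neurons covariance
evals = np.linalg.eigvalsh(cov)[::-1]                 # eigenvalues, descending

# Participation ratio: an effective dimensionality of the covariance.
pr = evals.sum() ** 2 / (evals ** 2).sum()
print(f"top eigenvalue fraction: {evals[0] / evals.sum():.2f}")
print(f"participation ratio: {pr:.1f} (out of {n_neurons})")
```

With a single global latent, the top eigenvalue dominates and the participation ratio comes out far below the neuron count; that spectral signature is what "low-dimensional" refers to in the abstract.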
Biological neuronal networks exhibit highly variable spiking activity. Balanced networks offer a parsimonious model of this variability in which strong excitatory synaptic inputs are canceled by strong inhibitory inputs on average, and irregular spiking activity is driven by fluctuating synaptic currents. Most previous studies of balanced networks assume a homogeneous or distance-dependent connectivity structure, but connectivity in biological cortical networks is more intricate. We use a heterogeneous mean-field theory of balanced networks to show that heterogeneous in-degrees can break balance. Moreover, heterogeneous architectures that achieve balance promote lower firing rates in neurons with larger in-degrees, consistent with some recent experimental observations.
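The balance-breaking claim can be illustrated with a small sketch, my own construction rather than the paper's mean-field theory: in a strongly coupled rate network with inhibitory recurrence and heterogeneous in-degrees, neurons with larger in-degrees receive more recurrent inhibition, so their rates are pushed down, and no single operating point can balance all neurons at once. All parameter values are illustrative assumptions.

```python
# A minimal sketch (an illustration, not the paper's model) of how
# heterogeneous in-degrees interact with the balanced scaling of weights.
import numpy as np

rng = np.random.default_rng(1)
N = 400
# Heterogeneous in-degrees: each neuron draws its own connection probability.
p_i = rng.uniform(0.05, 0.15, N)
A = (rng.random((N, N)) < p_i[:, None]).astype(float)  # row i: inputs to neuron i
k = A.sum(axis=1)                                      # in-degrees

w = 1.0 / np.sqrt(N)        # balanced (strong-coupling) weight scaling
X = np.sqrt(N) * 0.2        # strong external excitatory drive, O(sqrt(N))

relu = lambda u: np.maximum(u, 0.0)
r = np.zeros(N)
for _ in range(2000):       # fixed-point iteration of the rate dynamics
    r += 0.1 * (-r + relu(X - w * (A @ r)))

# Balance requires the O(sqrt(N)) excitation and inhibition to cancel for
# every neuron; with heterogeneous k_i that cannot hold uniformly, and
# neurons with larger in-degrees settle at lower rates.
print("corr(in-degree, rate):", np.corrcoef(k, r)[0, 1])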
Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations.
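A minimal sketch of the distance-dependent wiring the abstract refers to, under assumed simplifications (a 1-D ring rather than a 2-D sheet, and a single Gaussian kernel rather than separate excitatory and inhibitory widths):

```python
# A minimal sketch of sampling a distance-dependent connectivity matrix.
import numpy as np

rng = np.random.default_rng(2)
N = 500
sigma = 0.05                      # connection width on a ring of circumference 1

x = np.arange(N) / N              # neuron positions on the ring
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 1.0 - d)        # wrap-around (periodic) distance

p_conn = 0.5 * np.exp(-d**2 / (2 * sigma**2))   # Gaussian fall-off with distance
A = rng.random((N, N)) < p_conn                 # sample the adjacency matrix
np.fill_diagonal(A, False)                      # no self-connections

print("mean in-degree:", A.sum(axis=1).mean())
```

In models of this kind, the symmetry-breaking bifurcations arise when the spatial widths of excitatory and inhibitory projections differ; this sketch only shows the construction of the wiring itself.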
Recent advances in computing algorithms and hardware have rekindled interest in developing high-accuracy, low-cost surrogate models for simulating physical systems. The idea is to replace expensive numerical integration of complex coupled partial differential equations at fine time scales performed on supercomputers, with machine-learned surrogates that efficiently and accurately forecast future system states using data sampled from the underlying system. One particularly popular technique being explored within the weather and climate modelling community is the echo state network (ESN), an attractive alternative to other well-known deep learning architectures. Using the classical Lorenz 63 system and the three-tier multi-scale Lorenz 96 system (Thornes T, Duben P, Palmer T. 2017 Q. J. R. Meteorol. Soc. 143, 897–908. (doi:10.1002/qj.2974)) as benchmarks, we find that previously studied state-of-the-art ESNs operate in two distinct regimes, corresponding to low and high spectral radius (LSR/HSR) of the sparse, randomly generated reservoir recurrence matrix. Using knowledge of the mathematical structure of the Lorenz systems along with systematic ablation and hyperparameter sensitivity analyses, we show that state-of-the-art LSR-ESNs reduce to a polynomial regression model, which we call Domain-Driven Regularized Regression (D2R2). Interestingly, D2R2 is a generalization of the well-known SINDy algorithm (Brunton SL, Proctor JL, Kutz JN. 2016 Proc. Natl Acad. Sci. USA 113, 3932–3937. (doi:10.1073/pnas.1517384113)). We also show experimentally that LSR-ESNs (Chattopadhyay A, Hassanzadeh P, Subramanian D. 2019 (http://arxiv.org/abs/1906.08829)) outperform HSR-ESNs (Pathak J, Hunt B, Girvan M, Lu Z, Ott E. 2018 Phys. Rev. Lett. 120, 024102. (doi:10.1103/PhysRevLett.120.024102)), while D2R2 dominates both approaches. A significant goal in constructing surrogates is to cope with barriers to scaling in weather prediction and simulation of dynamical systems that are imposed by time and energy consumption in supercomputers. Inexact computing has emerged as a novel approach to helping with scaling. In this paper, we evaluate the performance of three models (LSR-ESN, HSR-ESN and D2R2) by varying the precision or word size of the computation as our inexactness-controlling parameter. For precisions of 64, 32 and 16 bits, we show that, surprisingly, the least expensive D2R2 method yields the most robust results and the greatest savings compared to ESNs. Specifically, D2R2 achieves 68× computational savings, with an additional 2× if precision reductions are also employed, outperforming ESN variants by a large margin. This article is part of the theme issue ‘Machine learning for weather and climate modelling’.
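As a rough illustration of the D2R2 idea, the sketch below (my reconstruction, not the authors' code) fits a ridge regression on quadratic polynomial features of the Lorenz 63 state to predict the next state, then forecasts autoregressively. Because the Lorenz 63 right-hand side is itself quadratic, this feature library is well matched to the system; the step size and ridge penalty are illustrative assumptions.

```python
# A minimal sketch of a D2R2-style polynomial surrogate for Lorenz 63.
import numpy as np

def lorenz63_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # One Euler step of the Lorenz 63 equations.
    x, y, z = s
    return s + dt * np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def features(s):
    # Constant, linear, and quadratic monomials: Lorenz 63's right-hand
    # side is quadratic, so this library can represent the Euler map.
    x, y, z = s
    return np.array([1, x, y, z, x*x, y*y, z*z, x*y, x*z, y*z])

# Generate training data by integrating the true system.
s = np.array([1.0, 1.0, 1.0])
traj = []
for _ in range(5000):
    traj.append(s)
    s = lorenz63_step(s)
traj = np.array(traj)

Phi = np.array([features(s) for s in traj[:-1]])   # inputs: features of x(t)
Y = traj[1:]                                       # targets: x(t + dt)
lam = 1e-6                                         # ridge penalty (assumed)
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ Y)

# Autoregressive forecast with the surrogate vs. the true system.
s_true = s_pred = traj[-1]
for _ in range(200):
    s_true = lorenz63_step(s_true)
    s_pred = features(s_pred) @ W
print("error after 200 steps:", np.linalg.norm(s_true - s_pred))
```

Note that this sketch is only the regression core: the paper's comparison additionally covers reservoir (ESN) baselines and reduced-precision arithmetic, neither of which is reproduced here.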