Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 × 10¹² synaptic connections which, in simulations, are each typically characterized by at least one floating-point value. This amounts to several terabytes of data - an unrealistic memory requirement for a single desktop machine. Large models are therefore typically simulated on distributed supercomputers, which is costly and limits large-scale modelling to a few privileged research groups. In this work, we describe extensions to GeNN - our Graphical Processing Unit (GPU) accelerated spiking neural network simulator - that enable it to 'procedurally' generate connectivity and synaptic weights 'on the go' as spikes are triggered, instead of storing and retrieving them from memory. We find that GPUs are well-suited to this approach because of their raw computational power which, due to memory bandwidth limitations, is often under-utilised when simulating spiking neural networks. We demonstrate the value of our approach with a recent model of the Macaque visual cortex consisting of 4.13 × 10⁶ neurons and 24.2 × 10⁹ synapses. Using our new method, it can be simulated on a single GPU - a significant step forward in making large-scale brain modelling accessible to many more researchers. Our results match those obtained on a supercomputer and the simulation runs up to 35% faster on a single high-end GPU than previously on over 1000 supercomputer nodes.

spiking neural networks | GPU | high-performance computing | brain simulation

The brain of a mouse has around 70 × 10⁶ neurons, but this number is dwarfed by the 1 × 10¹² synapses which connect them (1). In computer simulations of spiking neural networks, propagating spikes involves adding the synaptic input from each spiking presynaptic neuron to the postsynaptic neurons.
The information describing which neurons are synaptically connected and with what weight is typically generated before a simulation is run and stored in large arrays.
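The alternative explored here is to regenerate connectivity deterministically whenever a neuron spikes, rather than storing it. The following sketch illustrates the general idea in Python (the actual GeNN implementation generates CUDA code; the mixing constant, connection probability and weight distribution below are illustrative assumptions, not taken from the paper): by reseeding a random number generator from the presynaptic neuron's index, the same row of the connectivity matrix is reproduced on every spike, so no per-synapse storage is needed.

```python
import random

N_POST = 1000  # number of postsynaptic neurons (illustrative)
PROB = 0.1     # fixed-probability connectivity (illustrative)


def procedural_row(pre_idx, seed=42):
    """Regenerate the outgoing connections of one presynaptic neuron.

    Instead of reading a stored array, we derive an RNG state from the
    global seed and the presynaptic index, so the identical set of
    postsynaptic targets and weights is produced each time this neuron
    spikes. The multiplier is an arbitrary odd constant used only to mix
    the seed with the index for this sketch.
    """
    rng = random.Random(seed * 100003 + pre_idx)
    row = []
    for post_idx in range(N_POST):
        if rng.random() < PROB:
            # Draw the synaptic weight on the fly as well (assumed Gaussian).
            weight = rng.gauss(0.5, 0.1)
            row.append((post_idx, weight))
    return row


# Determinism: two spikes of the same neuron see identical connectivity.
assert procedural_row(7) == procedural_row(7)
```

The memory cost drops from one value per synapse to one seed per population, trading it for extra arithmetic on each spike; as the abstract notes, this suits GPUs, whose compute throughput is otherwise under-utilised relative to their memory bandwidth.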
For large-scale brain models this creates high memory requirements, so that they can typically only be simulated on large distributed computer systems using software such as NEST (2) or NEURON (3). By careful design, these simulators can keep the memory requirements for each node constant, even when a simulation is distributed across thousands of nodes (4). However, high-performance computing (HPC) systems are bulky, expensive and power-hungry, and are hence typically shared resources, only accessible to a limited number of researchers and for time-limited investigations.
Neuromorphic systems (5-9) take inspiration from the brain and have been developed specifically for simulating large spiking neural networks more efficiently. One particularly relevant feature of the brain is that its memory elements - the synapses ... Unfortunately, propagating a s...