Neural networks in the brain can function reliably despite various sources of errors and noise present at every step of signal transmission. These sources include errors in the presynaptic inputs to the neurons, noise in synaptic transmission, and fluctuations in the neurons' postsynaptic potentials. Collectively, they lead to errors in the neurons' outputs, which are, in turn, injected into the network. Does unreliable network activity hinder fundamental functions of the brain, such as learning and memory retrieval? To explore this question, this article examines the effects of errors and noise on the properties of biologically constrained networks of inhibitory and excitatory neurons involved in associative sequence learning. The associative learning problem is solved analytically and numerically, and it is also shown how memory sequences can be loaded into the network in an online manner with a more biologically plausible perceptron-type learning rule; this method leads to network properties similar to those obtained with the nonlinear optimization and replica methods. Interestingly, the results reveal that errors and noise during learning increase the probability of memory recall, although there is a tradeoff between the robustness of recall and the number of memories that can be stored. Finally, we examine the properties of networks of heterogeneous neurons and make predictions regarding network connectivity. The details of the replica calculation and numerical solutions of the associative memory storage model are provided in SI.
RESULTS
A. Network model of associative memory storage in the presence of errors and noise

We modeled associative sequence learning by a local (~100 μm in size), all-to-all potentially (structurally) connected (29, 30) cortical network. The model network consisted of $N_{\mathrm{inh}}$ inhibitory and $(N - N_{\mathrm{inh}})$ excitatory McCulloch and Pitts neurons (17) (Figure 1A) and was faced with the task of learning a sequence of consecutive network states, $X^1 \to X^2 \to \cdots \to X^{m+1}$, in which $X^\mu$ is a binary vector representing the target activities of all neurons at time step $\mu$; the ratio $m/N$ is referred to as the memory load. During learning, individual neurons had to independently learn to associate the inputs they received from the network with the corresponding target outputs derived from the associative memory sequence. The neurons learned these input-output associations by adjusting the weights of their input connections, $J_{ij}$ (the weight of the connection from neuron $j$ to neuron $i$). In contrast to previous studies, we accounted for the fact that learning in the brain is accompanied by several sources of errors and noise. Within the model, these sources are divided into three categories (orange lightning signs in Figure 1A): (1) spiking errors, or errors in $X^\mu$; (2) synaptic noise, or noise in $J_{ij}$; and (3) intrinsic noise, which combines all other sources of noise affecting the neurons' postsynaptic potentials. The last category includes background synaptic activity and the stochasticity of ion channels. In the model, this category is equivalent to n...
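To make the setup concrete: a McCulloch and Pitts neuron updates its state as $X_i^{\mu+1} = \theta\big(\sum_{j=1}^{N} J_{ij} X_j^{\mu} - h_i\big)$, where $\theta$ is the Heaviside step function and $h_i$ is a firing threshold (notation ours). The Python sketch below illustrates how such a network can learn a memory sequence online with a sign-constrained perceptron-type rule while the three noise categories act during training. All parameter values and the specific forms of the noise (input bit flips, multiplicative weight jitter, additive fluctuations of the postsynaptic potential) are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (assumptions, not values from the paper) ---
N = 200            # total number of neurons
N_inh = 40         # number of inhibitory neurons
m = 20             # number of transitions in the sequence (memory load m/N)
f = 0.2            # assumed fraction of active neurons per network state
eta = 0.01         # perceptron learning rate
p_flip = 0.02      # probability of a spiking error (flipped presynaptic input)
sigma_syn = 0.05   # relative amplitude of multiplicative synaptic noise
sigma_int = 0.10   # amplitude of additive intrinsic noise on the PSP
h = 1.0            # firing threshold (taken uniform here for simplicity)
epochs = 500

# Dale's law: each presynaptic neuron is inhibitory (-1) or excitatory (+1)
sign = np.ones(N)
sign[:N_inh] = -1.0

# Memory sequence X^1 -> X^2 -> ... -> X^(m+1) of random binary states
X = (rng.random((m + 1, N)) < f).astype(float)

# Non-negative weight magnitudes; the effective weight is J_ij = sign_j * W_ij
W = 0.1 * rng.random((N, N))
np.fill_diagonal(W, 0.0)  # no self-connections

for _ in range(epochs):
    for mu in range(m):
        x_in, x_target = X[mu], X[mu + 1]

        # (1) spiking errors: flip a small fraction of presynaptic inputs
        flips = rng.random(N) < p_flip
        x_noisy = np.where(flips, 1.0 - x_in, x_in)

        # (2) synaptic noise: multiplicative jitter of connection weights
        J = (W * (1.0 + sigma_syn * rng.standard_normal((N, N)))) * sign

        # (3) intrinsic noise: additive fluctuation of postsynaptic potentials
        psp = J @ x_noisy + sigma_int * rng.standard_normal(N)

        # McCulloch-Pitts output followed by a perceptron-type update,
        # projected back onto the sign constraints after each step
        y = (psp > h).astype(float)
        err = x_target - y
        W += eta * np.outer(err, sign * x_noisy)
        np.fill_diagonal(W, 0.0)
        W = np.maximum(W, 0.0)

# Noise-free recall: fraction of target bits reproduced from each state
J = W * sign
recall = np.mean(((X[:m] @ J.T) > h).astype(float) == X[1:])
print(f"recall accuracy: {recall:.3f}")
```

Each neuron here learns its input-output associations independently, as in the model: the update for row $i$ of the weight matrix depends only on neuron $i$'s own error, and clipping the magnitudes at zero keeps excitatory and inhibitory connections from changing sign.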