“…Examples include the use of convolutional layers [28,13,29,30] (and tables therein), dendritic computations [31,32,12], or backpropagation approximations such as feedback alignment [11,33,34,35,36,14], equilibrium propagation [37], membrane-potential-based backpropagation [38], restricted Boltzmann machines and deep belief networks [39,40], (localized) difference target propagation [41,14], reinforcement signals [42,43], or predictive coding [44]. Many models implement spiking neurons to stress biological plausibility [45,46,47,48,49,13] (and tables therein) or coding efficiency [50]. Converting DNNs into spiking neural networks (SNNs) after training with backpropagation [51] is a common technique to evade the difficulties of training directly with spikes.…”
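To make one of the listed backpropagation approximations concrete, the following is a minimal sketch of feedback alignment on a toy two-layer network: instead of propagating the output error through the transpose of the forward weights `W2`, the error is sent backward through a fixed random matrix `B`. All dimensions, learning rates, and the synthetic data are illustrative assumptions, not taken from the cited works.

```python
# Minimal feedback-alignment sketch (assumed toy setup, not from the cited papers).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: 64 samples, 10 inputs, 2 targets (hypothetical sizes).
X = rng.standard_normal((64, 10))
T = rng.standard_normal((64, 2))

W1 = 0.1 * rng.standard_normal((10, 20))  # input -> hidden weights
W2 = 0.1 * rng.standard_normal((20, 2))   # hidden -> output weights
B = 0.1 * rng.standard_normal((2, 20))    # fixed random feedback matrix

lr = 0.01
losses = []
for _ in range(200):
    h = np.tanh(X @ W1)        # forward pass, hidden activations
    y = h @ W2                  # linear readout
    e = y - T                   # output error
    losses.append(float((e ** 2).mean()))
    # Feedback alignment: route the error through B instead of W2.T.
    dh = (e @ B) * (1.0 - h ** 2)   # tanh derivative applied elementwise
    W2 -= lr * (h.T @ e)
    W1 -= lr * (X.T @ dh)
```

Because `B` is fixed and random, no weight transport between forward and backward pathways is required; training still reduces the loss as the forward weights gradually align with the feedback matrix.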