The bar for state-of-the-art classification error rates has been pushed to new levels by GoogLeNet [1] and VGG-16 [2] on computer vision benchmarks such as ImageNet. [3] Artificial neural networks (ANNs) have shown remarkable performance on tasks of practical importance such as image recognition, edge detection, decision making, sequence recognition, and playing the game of Go, to name a few. Spiking neural networks (SNNs), on the other hand, are very effective at reducing the latency and computational load of deep neural networks. [4,5] SNNs can output results even after the first output spikes and have also been shown to be accurate, fast, and efficient when implementing object recognition or detection on neuromorphic hardware platforms. [8]
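The latency point can be made concrete with a toy simulation. The following is a minimal sketch, not taken from any of the cited works, of rate-coded integrate-and-fire (IF) neurons in which a classification can be read out as soon as the first output spikes appear; the weights, inputs, and layer sizes are illustrative placeholders.

```python
import numpy as np

# Toy sketch (illustrative only): a single layer of integrate-and-fire
# neurons driven by Poisson-like rate-coded inputs. A class label can be
# read out from the very first output spikes, before the full simulation
# window of T timesteps has elapsed.
rng = np.random.default_rng(0)
n_in, n_out, T = 100, 10, 200               # input size, classes, timesteps
W = rng.normal(0.0, 0.1, (n_out, n_in))     # hypothetical trained weights
x = rng.random(n_in)                        # input encoded as rates in [0, 1]
v = np.zeros(n_out)                         # membrane potentials
v_th = 1.0                                  # firing threshold
spike_counts = np.zeros(n_out)

for t in range(T):
    in_spikes = (rng.random(n_in) < x).astype(float)  # rate-coded input spikes
    v += W @ in_spikes                                # integrate synaptic input
    out_spikes = v >= v_th
    v[out_spikes] -= v_th                             # reset by subtraction
    spike_counts += out_spikes
    if spike_counts.sum() >= 1:                       # first output spike(s)
        print(f"early readout at t={t}: class {spike_counts.argmax()}")
        break
print("output spike counts:", spike_counts)
```

Reset-by-subtraction (rather than resetting the potential to zero) keeps the residual charge above threshold, which the conversion literature reports reduces the approximation error between ANN activations and SNN firing rates.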
It is then natural to look for a middle path by consolidating the advantages of these two types of networks in a single computing system. The key to bridging the gap between continuous-valued ANNs and neuromorphic spiking networks is to develop SNNs that can match the error rates of their continuous-valued counterparts. There have been a few efforts in this direction, such as training SNNs using backpropagation, [9] implementing SNN classification layers using stochastic gradient descent, [10] or modifying the transfer function of the ANN during training so that the network parameters can be mapped to an SNN. [4,11] Although these results are promising, such methods cannot yet efficiently train a spiking architecture of the size of VGG-16. A seemingly easier approach is to take the weights of a pretrained ANN and map them to an equivalently accurate SNN. There have been several efforts in the field of ANN-SNN conversion. In one case, convolutional neural network (CNN) units were translated into biologically inspired spiking units with leaks and refractory periods. [12] In another report, nearly lossless conversion of ANNs for the Modified National Institute of Standards and Technology (MNIST) [13] classification task was achieved by using a weight-normalization scheme. [6] This scheme rescales the weights so that neurons fire neither excessively nor too rarely, thereby avoiding the corresponding approximation errors in the SNN. Researchers at IBM demonstrated an approach that optimizes CNNs for the TrueNorth platform, which has binary weights and restricted connectivity. [14] In another study along similar lines, a conversion method was developed in which spiking neurons adapt their firing threshold to reduce the number of spikes needed to encode information. [15] However, such studies have all been limited to conventional complementary metal-oxide-semiconductor (CMOS) hardware, and little effort has been invested in combining this relatively important new paradigm (ANN-SNN conversion) with the inherent advantages offered by emerging nanoscale devices such as memristors.
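A minimal sketch of the weight-rescaling idea behind the data-based normalization scheme of [6] is shown below, assuming a fully connected ReLU network; the function name, shapes, and calibration procedure are illustrative assumptions, not the cited implementation.

```python
import numpy as np

# Illustrative sketch of data-based weight normalization for ANN-SNN
# conversion: each layer is rescaled by the peak activation it produces on
# a calibration set, so the converted IF neurons neither saturate
# (excessive firing) nor stay nearly silent (too little firing).
def normalize_weights(weights, calib_inputs):
    """Rescale each dense ReLU layer by its peak calibration activation.

    weights      : list of (W, b) tuples, W of shape (n_out, n_in)
    calib_inputs : calibration samples, one per row, assumed scaled to [0, 1]
    """
    normalized = []
    a = calib_inputs
    prev_scale = 1.0                        # input scale (inputs already in [0, 1])
    for W, b in weights:
        a = np.maximum(a @ W.T + b, 0.0)    # ReLU activations of the original net
        scale = max(a.max(), 1e-12)         # peak activation on calibration data
        # W' = W * lam_{l-1} / lam_l and b' = b / lam_l keeps the mapping exact
        # while bounding every layer's activations (hence firing rates) by 1.
        normalized.append((W * prev_scale / scale, b / scale))
        prev_scale = scale
    return normalized
```

In a converted network, the rescaled weights are used with integrate-and-fire neurons whose firing rates then approximate the ReLU activations of the original ANN over the simulation window.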