With the adoption of smart systems, Artificial Neural Networks (ANNs) have become ubiquitous. Conventional ANN implementations have a high energy consumption, limiting their use in embedded and mobile applications. Spiking Neural Networks (SNNs) mimic the dynamics of biological neural networks by distributing information over time through binary spikes. Neuromorphic hardware has emerged to leverage the characteristics of SNNs, such as asynchronous processing and high activation sparsity. SNNs have therefore recently gained interest in the machine learning community as a brain-inspired alternative to ANNs for low-power applications. However, the discrete representation of information makes training SNNs with backpropagation-based techniques challenging. In this survey, we review training strategies for deep SNNs targeting deep learning applications such as image processing. We start with methods based on the conversion from an ANN to an SNN and compare them with backpropagation-based techniques. We propose a new taxonomy of spiking backpropagation algorithms into three categories: spatial, spatio-temporal, and single-spike approaches. In addition, we analyze different strategies to improve accuracy, latency, and sparsity, such as regularization methods, training hybridization, and tuning of the parameters specific to the SNN neuron model. We highlight the impact of input encoding, network architecture, and training strategy on the accuracy-latency trade-off. Finally, in light of the remaining challenges for accurate and efficient SNN solutions, we emphasize the importance of joint hardware-software co-development.
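To make the spike-based information representation concrete, below is a minimal sketch (not taken from the survey) of a discrete-time leaky integrate-and-fire (LIF) neuron, the model most commonly used in the works the abstract refers to; the decay factor, threshold, and input statistics are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the survey): a leaky
# integrate-and-fire (LIF) neuron simulated over discrete time steps,
# showing how an SNN spreads information over time as binary spikes.
# beta and threshold are assumed values, not taken from the paper.
import numpy as np

def lif_forward(input_current, beta=0.9, threshold=1.0):
    """Simulate one LIF neuron; return its binary spike train."""
    v = 0.0                            # membrane potential
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v = beta * v + i_t             # leaky integration of the input
        if v >= threshold:             # fire when the threshold is crossed...
            spikes[t] = 1.0
            v -= threshold             # ...and reset by subtraction
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 0.5, size=20)   # random input over 20 time steps
print(lif_forward(current))                # sparse binary activations
```

The non-differentiable threshold crossing in this loop is precisely what makes backpropagation challenging and motivates the taxonomy of spiking backpropagation algorithms discussed in the survey.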
Low-power memristive devices embedded in the logic core of graphics or central processing units are a very promising non-von-Neumann approach to significantly improve the speed and power consumption of deep learning accelerators, easing their deployment on embedded systems. Among the various emerging, non-ideal neuromorphic memory devices, hardware implementation of synaptic weights using resistive random-access memories (RRAMs) in 1T1R architectures promises high performance for low-precision binarized neural networks (BNNs). Taking advantage of RRAM capabilities and of the substantial density gains enabled by an ovonic threshold selector (OTS), this work proposes to replace the standard 1T1R architecture with a denser 1S1R crossbar system, in which an HfO2-based resistive oxide memory (OxRAM) is co-integrated with a Ge-Se-Sb-N-based OTS. In this context, an extensive experimental study is performed to optimize the 1S1R stack and programming conditions for an extended read window margin and improved endurance. Focusing on the standard MNIST image recognition task, we perform offline training simulations to define the constraints on the devices during the training process. A very promising bit error rate of ∼10⁻³ is demonstrated, together with error-free 1S1R programming endurance over 10⁴ cycles, fulfilling the requirements of the target application. Based on this simulation and experimental study, BNN figures of merit (system footprint, number of weight updates, accuracy, inference speed, electrical consumption per image classification, and tolerance to errors) are optimized by engineering the number of learnable parameters of the system. Altogether, an inherent BNN resilience to 1S1R parasitic bit errors is demonstrated.
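As a hedged illustration of the tolerance-to-errors analysis (a sketch under an assumed weight shape and error model, not the authors' simulator), the following snippet injects random bit flips into a binarized {-1, +1} weight matrix at a given bit error rate, mimicking 1S1R read errors:

```python
# Illustrative sketch: random bit flips on binarized weights at a given
# bit error rate (BER). The 256x256 shape and BER=1e-3 are assumptions
# chosen to match the order of magnitude reported in the abstract.
import numpy as np

def flip_bits(w_bin, ber, rng):
    """Flip each binarized {-1, +1} weight with probability `ber`."""
    mask = rng.random(w_bin.shape) < ber
    return np.where(mask, -w_bin, w_bin)   # a flip negates the weight

rng = np.random.default_rng(42)
w = np.sign(rng.standard_normal((256, 256)))   # binarized weight matrix
w_noisy = flip_bits(w, ber=1e-3, rng=rng)
print("fraction flipped:", np.mean(w != w_noisy))   # ≈ 1e-3
```

Re-evaluating classification accuracy with such perturbed weights is one simple way to probe the inherent BNN resilience to parasitic bit errors that the study demonstrates.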