Event-driven architectures have been shown to provide low-power, low-latency artificial neural network (ANN) inference. This is especially beneficial on edge devices, particularly when combined with sparse execution. Recurrent neural networks (RNNs) are ANNs that emulate memory: their recurrent connections enable previous outputs to be reused in the generation of new outputs. However, executing RNNs sparsely on event-driven architectures raises novel challenges in synchronization and in the handling of sparse data. In this work, these challenges are systematically analyzed, and mechanisms to overcome them are proposed. Experimental results for a monocular depth estimation use case on the NeuronFlow architecture show that sparsity in RNNs can be exploited effectively on event-driven architectures.
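As a concrete illustration of the recurrence mentioned above, a generic Elman-style RNN update (an illustrative assumption for exposition, not necessarily the specific cell used in this work) reuses the previous hidden state at every time step:

$$
h_t = \sigma\!\left(W_x x_t + W_h h_{t-1} + b\right), \qquad y_t = W_y h_t,
$$

where $h_{t-1}$ is the previous state fed back through the recurrent weights $W_h$ and $\sigma$ is a nonlinearity. In an event-driven, sparse setting, updates can in principle be restricted to the entries of $x_t$ and $h_{t-1}$ that have actually changed.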
CCS CONCEPTS
• Hardware → Asynchronous circuits; • Computer systems organization → Embedded hardware; Embedded software; Neural networks.