2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2019.00213
Live Demonstration: Face Recognition on an Ultra-Low Power Event-Driven Convolutional Neural Network ASIC

Abstract: We demonstrate an event-driven Deep Learning (DL) hardware-software ecosystem. The user-friendly software tools port models from Keras (a popular machine learning library), automatically convert DL models to spiking equivalents, i.e. Spiking Convolutional Neural Networks (SCNNs), and run spiking simulations of the converted models on a hardware emulator for testing and prototyping. More importantly, the software ports the converted models onto a novel, ultra-low power, real-time, event-driven ASIC SCNN chip: DynapCNN.
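The core idea behind the CNN-to-SCNN conversion the abstract describes is that a ReLU activation can be approximated by the firing rate of a non-leaky integrate-and-fire (IAF) neuron. The sketch below is a minimal illustration of that equivalence, not the paper's toolchain; the function name, threshold, and step count are assumptions for illustration.

```python
import numpy as np

def iaf_layer(weighted_input, threshold=1.0, n_steps=100):
    """Simulate a non-leaky integrate-and-fire layer over n_steps.

    weighted_input: per-neuron input current (e.g. W @ x + b), shape (n,).
    Returns each neuron's firing rate, which approximates
    relu(weighted_input) / threshold under rate coding.
    """
    v = np.zeros_like(weighted_input)             # membrane potentials
    spike_counts = np.zeros_like(weighted_input)
    for _ in range(n_steps):
        v += weighted_input                       # integrate, no leak
        spikes = v >= threshold                   # fire where threshold crossed
        v[spikes] -= threshold                    # reset by subtraction
        spike_counts += spikes
    return spike_counts / n_steps                 # spikes per time step

x = np.array([0.3, -0.2, 0.05, 0.9])
print(iaf_layer(x))          # ~ [0.3, 0.0, 0.05, 0.9]
print(np.maximum(x, 0.0))    # ReLU reference for comparison
```

Because the firing rate matches the ReLU output, a trained CNN's weights can in principle be reused directly by the spiking equivalent, with only thresholds and weight scales adjusted per layer.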

Cited by 18 publications (16 citation statements). References 4 publications.
“…The ultra-low power consumption of mixed-signal neuromorphic chips makes them suitable for edge applications, such as always-on voice detection [32], vibration monitoring [33] or always-on face recognition [34]. For this reason, we consider two compact network architectures in our experiments: a Spiking Recurrent Neural Network (SRNN) with roughly 65k trainable parameters and a conventional CNN with roughly 500k trainable parameters (see S1 for more information).…”
Section: Results
confidence: 99%
“…After training, we tested our trained weights on spiking network simulations. Unlike tests done on analog networks, these are time-dependent simulations, which fully account for the time dynamics of the input spike trains, and closely mimic the behavior of a neuromorphic hardware implementation, like DynapCNN (Liu et al., 2019). Our simulations are written using the Sinabs Python library, which uses non-leaky integrate-and-fire neurons with a linear response function.…”
Section: Methods
confidence: 99%
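The time-dependent simulation this excerpt describes steps neuron state forward per time bin, driven by an input spike train rather than a static tensor. Below is a minimal NumPy sketch of that style of simulation, using the same non-leaky IAF dynamics with reset by subtraction; the spike rates, weights, and threshold are illustrative assumptions, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative input: Poisson spike trains, 3 input channels x 200 time steps.
rates = np.array([0.05, 0.2, 0.4])            # spike probability per time step
spike_train = rng.random((200, 3)) < rates    # boolean array of shape (T, 3)

w = np.array([0.5, 0.3, 0.2])                 # synaptic weights, 1 output neuron
threshold = 1.0

v = 0.0
out_spikes = []
for t in range(spike_train.shape[0]):
    v += w @ spike_train[t]        # integrate weighted input events, no leak
    if v >= threshold:
        out_spikes.append(t)       # record the output event time
        v -= threshold             # reset by subtraction (linear response)

print(f"{len(out_spikes)} output spikes; first few at t = {out_spikes[:5]}")
```

Unlike a rate-based test, this simulation preserves the timing of every input event, which is why it more closely mimics the behavior of event-driven hardware.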
“…Spiking neuromorphic implementations include large-scale simulation of neuronal networks for neuroscience research (Furber et al., 2012) and low-power real-world deployments of machine learning algorithms. In particular, convolutional neural network (CNN) architectures, used for computer vision, have been run on neuromorphic chips such as IBM's TrueNorth (Esser et al., 2016), Intel's Loihi (Davies et al., 2018) and SynSense's Speck and Dynap-CNN hardware (Liu et al., 2019a). The full pipeline of event-based sensors that output sparse data, stateful spiking neural networks which extract semantic meaning, and asynchronous hardware backends allows for large gains in power efficiency when compared to conventional systems.…”
Section: What Is Event-based Sensing?
confidence: 99%
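The power-efficiency gain mentioned in that last excerpt comes from computing only on events: work is done per incoming spike instead of per pixel per frame. The sketch below illustrates that event-driven update rule; the event format (x, y, polarity, timestamp) follows the usual DVS-camera convention, but the layer sizes, weights, and event list are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

H, W, N_OUT = 8, 8, 4                         # tiny sensor and layer, illustrative
weights = rng.normal(0, 0.3, (N_OUT, H, W))   # one weight per (neuron, pixel)
threshold = 1.0
v = np.zeros(N_OUT)                           # membrane potentials

# Sparse events as (x, y, polarity, timestamp), as an event camera emits them.
events = [(2, 3, 1, 10), (2, 4, 1, 12), (5, 1, -1, 15), (2, 3, 1, 18)]

for x, y, p, t in events:
    v += p * weights[:, y, x]                 # update only synapses hit by this event
    fired = v >= threshold
    if fired.any():
        print(f"t={t}: output neurons {np.flatnonzero(fired)} fire")
        v[fired] -= threshold                 # reset by subtraction

# Compute cost scales with the number of events, not with H * W * frame_rate.
```

When the scene is static the sensor emits almost no events, so the network does almost no work, which is the source of the large efficiency gains the excerpt describes.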