2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date54114.2022.9774552
SNE: an Energy-Proportional Digital Accelerator for Sparse Event-Based Convolutions

Abstract: Event-based sensors are drawing increasing attention due to their high temporal resolution, low power consumption, and low bandwidth. To efficiently extract semantically meaningful information from the sparse data streams produced by such sensors, we present a 4.5 TOP/s/W digital accelerator capable of performing 4-bit-quantized event-based convolutional neural networks (eCNNs). Compared to standard convolutional engines, our accelerator performs a number of operations proportional to the number of events contained…
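The key property the abstract claims — work proportional to the number of events rather than to the frame size — can be illustrated with a minimal event-driven convolution sketch. This is only an illustration of the general scatter-accumulate idea, not SNE's actual datapath: the function name, the event format, and the omission of timestamps, polarity, and 4-bit quantization are all simplifying assumptions.

```python
import numpy as np

def event_conv2d(events, weights, out_shape):
    """Event-driven 2D convolution sketch.

    Instead of sliding the kernel over a dense frame (cost ~ H*W*K*K),
    each input event scatters the kernel weights onto the output map,
    so the cost is ~ num_events * K*K: no events, no work.
    """
    K = weights.shape[0]  # square K x K kernel assumed
    out = np.zeros(out_shape, dtype=np.float32)
    for (x, y) in events:  # one scatter-accumulate per event
        for dx in range(K):
            for dy in range(K):
                # center the kernel on the event coordinate
                ox, oy = x + dx - K // 2, y + dy - K // 2
                if 0 <= ox < out_shape[0] and 0 <= oy < out_shape[1]:
                    out[ox, oy] += weights[dx, dy]
    return out
```

For a single event at the center of a 5x5 map with an all-ones 3x3 kernel, exactly nine accumulations are performed, regardless of the map's resolution — this is the energy-proportionality the paper's title refers to.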

Cited by 14 publications (7 citation statements) | References 21 publications
“…The accelerator domain is optimized for ML-based perception. It comprises two compute engines: i) an all-digital sparse neural engine (SNE) [16], tailored for sparse neuromorphic workloads, specifically SNNs. SNE is designed for streamlined interaction with event-based sensors generating spikes, i.e., binary and time-stamped input events, that SNE processes in an event-driven fashion, and ii) CUTIE, designed for maximum inference energy efficiency and low latency by exploiting extreme quantization in TNNs to perform convolutions with a completely unrolled data path.…”
Section: CUTIE (Extreme…
confidence: 99%
“…The third type of neuromorphic hardware follows the scheme of ANN accelerator design, except that it constructs dedicated hardware for synaptic operations and explores dataflows optimized specifically for SNNs [20]-[26]. Such designs require less area and achieve higher computing-resource utilization.…”
Section: Related Work
confidence: 99%
“…Compared with [21], FireFly achieves higher accuracy and a 6× inference speedup on the CIFAR10 dataset. Compared with the ASIC design [26], FireFly achieves a 2× speedup and similar accuracy on the DVS-Gesture dataset. Note that our SNN models are considerably bigger and deeper than the listed benchmarks.…”
Section: Benchmark Evaluations
confidence: 99%
“…The approach is motivated by our long-term research interest in validating the accuracy and profiling the execution of event-based techniques for future implementation on event-based computing platforms, as an alternative to the dominant DL models that rely on matrix multiplication over temporal data buffers [3], [5]. In contrast, event-based computing promises reduced computation latency and energy consumption [17]-[19].…”
Section: Introduction
confidence: 99%