2015 Symposium on VLSI Circuits (VLSI Circuits)
DOI: 10.1109/vlsic.2015.7231323
A 640M pixel/s 3.65mW sparse event-driven neuromorphic object recognition processor with on-chip learning

Cited by 61 publications (31 citation statements); references 3 publications.
“…A switched-capacitor analog implementation has also been proposed to ease robust analog design in deep submicron technologies [26], [27]. However, in order to fully leverage technology scaling, several research groups recently started designing digital SNNs (e.g., Seo et al in [28], Kim et al in [29], IBM with TrueNorth [30] and Intel with Loihi [31]). Digital designs have a shorter design cycle, low sensitivity to noise, process-voltage-temperature (PVT) variations and mismatch, and eliminate the need to generate bias currents and voltages.…”
Section: Introductionmentioning
confidence: 99%
“…An analysis of the energy, area and accuracy tradeoffs is shown in Fig. 14, together with results from Kim et al [44], from Buhler et al [45] and from TrueNorth, which was benchmarked on MNIST in [46]. In order to carry out the comparison on a one-to-one basis, all area and energy numbers have been normalized to a 65-nm technology node.…”
Section: Tradeoff Analysis Of Energy Area and Accuracymentioning
confidence: 99%
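The quote above normalizes area and energy figures from chips fabricated in different processes to a common 65-nm node. A minimal sketch of that kind of normalization is below, assuming the common first-order scaling rules (area scales with the square of the feature size, energy roughly linearly); the cited paper may use different scaling factors, and `normalize_to_node` is a hypothetical helper, not the authors' code.

```python
def normalize_to_node(area_mm2, energy_pj, source_node_nm, target_node_nm=65.0):
    """Rescale reported silicon area and per-operation energy to a target
    technology node, using first-order assumptions:
      area   ~ (feature size)^2
      energy ~ (feature size)
    Returns (area_mm2, energy_pj) at the target node."""
    s = target_node_nm / source_node_nm
    return area_mm2 * s ** 2, energy_pj * s

# Example: a hypothetical 28-nm design (1.0 mm^2, 10 pJ/op) scaled up to 65 nm.
area_65, energy_65 = normalize_to_node(area_mm2=1.0, energy_pj=10.0,
                                       source_node_nm=28.0)
```

Such linear/quadratic scaling is only a rough proxy (leakage, wire delay, and SRAM density do not follow it exactly), which is why cross-node comparisons like the one quoted are usually presented as approximate.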
“…We also compare our work with a wider range of implementations, including custom ASIC chips [8,41,50,59], neural processing units [18], spiking neural networks [14,28,42], crossbar implementations [57], and CPU/GPU-based solutions of the DropConnect approach [58] (the most accurate approach for MNIST to date; data is measured via i7-5820K, 32GB DDR3 with Nvidia Titan). Fig.…”
Section: Comparison To Other Mnist Implementationsmentioning
confidence: 99%
“…To improve the delay and energy efficiency of computational tasks related to both inference and training, the hardware design and architecture communities are considering how hardware can best be employed to realize algorithms/models from the machine learning community. Approaches include application-specific integrated circuits (ASICs) to accelerate deep neural networks (DNNs) [50,59] and convolutional neural networks (CoNNs) [41], neural processing units (NPUs) [18], hardware realizations of spiking neural networks [14,28], etc.…”
Section: Introductionmentioning
confidence: 99%