2017 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc.2017.7870349
14.1 A 2.9TOPS/W deep convolutional neural network SoC in FD-SOI 28nm for intelligent embedded systems

Cited by 111 publications (70 citation statements). References 4 publications.
“…ASICs [27], [28], [33], […] Helium, an ISA extension tailored for DSP-oriented workloads such as inference tasks. However, such an extension is not yet supported by any device.…”
Section: Performance, Energy Efficiency, Power Budget, Flexibility
confidence: 99%
“…Given the number of designs that have been published for CNNs, we will focus on a more direct comparison with accelerators that explicitly target a tradeoff between accuracy and energy or performance, keeping in mind that state-of-the-art accelerators for "conventional" fixed-point arithmetic such as Orlando [22] are able to reach energy efficiencies in the order of a few Top/s/W.

Dataset / Network                              Top-1 Acc.   CONV / FC weights
MNIST / fully connected BNN [18]               99.04 %      - / 1.19 MB
SVHN / fully connected BNN [18]                97.47 %      139.7 kB / 641.3 kB
CIFAR-10 / fully connected BNN [18]            89.95 %      558.4 kB / 1.13 MB
ImageNet / ResNet-18 XNOR-Net [19]             51.2 %       1.31 MB / 2.99 MB
ImageNet / ResNet-18 ABC-Net M=3,N=3 [21]      61.0 %       3.93 MB / 8.97 MB
ImageNet / ResNet-18 ABC-Net M=5,N=5 [21]      65.0 %       6.55 MB / 14.95 MB
ImageNet / ResNet-34 ABC-Net M=1,N=1 [21]      52 …

The approaches used to reduce energy consumption in CNNs can be broadly divided into two categories, sometimes applied simultaneously.…”
Section: Related Work
confidence: 99%
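The weight-size columns in the quoted table follow from simple bit-counting: a layer with P binarized weights needs P/8 bytes, and an ABC-Net with M binary bases stores M such copies, which is why the M=3 and M=5 ResNet-18 rows are exactly 3x and 5x the single-base XNOR-Net row. A minimal sketch of that arithmetic (the parameter count below is illustrative, not taken from [18], [19], or [21]):

```python
import math

def binary_weight_bytes(num_params: int, num_bases: int = 1) -> int:
    """Storage for binarized weights: one bit per weight per binary basis."""
    return math.ceil(num_params * num_bases / 8)

# Illustrative parameter count (hypothetical, not a figure from the cited papers)
conv_params = 11_000_000

single = binary_weight_bytes(conv_params, num_bases=1)  # XNOR-style, M=1
triple = binary_weight_bytes(conv_params, num_bases=3)  # ABC-Net, M=3

print(single, triple, triple == 3 * single)  # storage scales linearly in M
```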
“…So no matter how large the decoder, there will always exist a minimum number of words above which the memory area savings are higher than the decoder overhead. In [20], the authors report an SRAM bit cell area of A_B = 0.12 µm² in the technology we use, also for an ANN application. Given this information, we can derive rough but credible estimates of the size of a memory cut of W words of B bits each, and decide when it is interesting to use our encoding approach.…”
Section: Area Savings
confidence: 99%
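The break-even condition in the quoted passage reduces to area arithmetic: a memory cut of W words of B bits occupies roughly W x B x A_B, so shortening each word by a few encoded bits saves W x (bits saved) x A_B, and the encoding pays off once that saving exceeds the decoder's area. A hedged sketch of the estimate (A_B = 0.12 µm² is the bit-cell area quoted from [20]; the decoder area below is a made-up placeholder, not a figure from the paper):

```python
import math

BITCELL_UM2 = 0.12  # SRAM bit-cell area from [20] (µm², same technology)

def sram_area_um2(words: int, bits_per_word: int) -> float:
    """Rough area of a memory cut: bit cells only, periphery ignored."""
    return words * bits_per_word * BITCELL_UM2

def breakeven_words(bits_saved_per_word: int, decoder_area_um2: float) -> int:
    """Minimum word count above which encoding saves net area.
    decoder_area_um2 is a hypothetical overhead figure."""
    return math.ceil(decoder_area_um2 / (bits_saved_per_word * BITCELL_UM2))

# Example: saving 4 bits/word against a (hypothetical) 600 µm² decoder
print(breakeven_words(4, 600.0))  # → 1250 words
```

Above that word count the bit-cell area saved outweighs the fixed decoder cost, matching the passage's claim that a break-even point always exists.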