2017
DOI: 10.1109/tcsii.2017.2691771

Energy-Efficient Design of Processing Element for Convolutional Neural Network

Cited by 27 publications (7 citation statements). References 8 publications.
“…The power consumption is 154.98 mW and the power efficiency reaches up to 1.084 TOPS/W because of the simple and regular data flow. The power efficiency is better than that of most designs except for [19] and [34]. [19] uses lower-bitwidth hardware to compute 30%-60% sparse networks at a lower supply voltage to attain lower area cost or power consumption.…”
Section: F. Design Comparison
confidence: 99%
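
As a quick sanity check on the quoted figures, power efficiency (TOPS/W) multiplied by power gives the implied throughput. The snippet below is an illustrative back-of-the-envelope calculation based only on the two numbers quoted above; the resulting throughput is derived here, not a value reported in the cited work.

# Back-of-the-envelope check of the quoted efficiency figures (illustrative only).
power_w = 154.98e-3            # reported power consumption: 154.98 mW
efficiency_tops_per_w = 1.084  # reported power efficiency: 1.084 TOPS/W

# Implied throughput = efficiency * power (in TOPS, printed as GOPS).
throughput_tops = efficiency_tops_per_w * power_w
print(f"Implied throughput: {throughput_tops * 1e3:.0f} GOPS")  # ~168 GOPS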
“…[19] uses lower-bitwidth hardware to compute 30%-60% sparse networks at a lower supply voltage to attain lower area cost or power consumption. [34] uses an optimized numerical representation for lower-bitwidth hardware to achieve lower power consumption.…”
Section: F. Design Comparison
confidence: 99%
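
The supply-voltage part of this claim follows from the standard CMOS dynamic-power relation P_dyn = alpha * C * V^2 * f, where power falls quadratically with supply voltage. The sketch below only illustrates that general relation with hypothetical voltages, capacitance, and frequency; none of these values come from [19] or [34].

# Illustrative CMOS dynamic-power scaling: P_dyn = alpha * C * V^2 * f.
# All parameter values are hypothetical, not taken from the cited designs.
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts ** 2 * f_hz

p_nominal = dynamic_power(0.1, 1e-9, 1.0, 200e6)  # nominal 1.0 V supply
p_scaled = dynamic_power(0.1, 1e-9, 0.8, 200e6)   # lowered 0.8 V supply
print(f"Dynamic power reduction: {1 - p_scaled / p_nominal:.0%}")  # ~36% lower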
“…There are various approaches to optimizing the dataflow, such as designing an elaborate dataflow [20][21][22][23][24][25][26][27][28], selecting the best dataflow from several candidates [29][30][31][32], and design space exploration [33][34][35][36][37][38][39][40][41]. In addition, low-bit representation in CNN inference can significantly reduce storage and communication requirements, for example through a low-precision architecture [42][43][44] that reduces on-chip data access and movement. A fair number of these studies focus on the performance and/or the energy efficiency of the computational components.…”
Section: Introduction
confidence: 99%
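
To make the low-bit-representation point concrete, the sketch below quantizes 32-bit floating-point convolution weights to signed 8-bit integers and compares storage. The tensor shape and bit widths are hypothetical, chosen only to show the 4x storage reduction that motivates such architectures; it does not reproduce the specific designs in [42]-[44].

import numpy as np

# Illustrative low-bit weight quantization (hypothetical tensor, not from [42]-[44]).
weights = np.random.randn(64, 3, 3, 3).astype(np.float32)  # FP32 conv kernel

# Symmetric linear quantization to signed 8-bit integers.
scale = np.abs(weights).max() / 127.0
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

print(f"FP32 storage: {weights.nbytes} bytes")
print(f"INT8 storage: {q_weights.nbytes} bytes")  # 4x smaller
print(f"Max abs quantization error: {np.abs(weights - q_weights * scale).max():.4f}")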