2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.643
Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning

Abstract: Deep convolutional neural networks (CNNs) are indispensable to state-of-the-art computer vision algorithms. However, they are still rarely deployed on battery-powered mobile devices, such as smartphones and wearable gadgets, where vision algorithms can enable many revolutionary real-world applications. The key limiting factor is the high energy consumption of CNN processing due to its high computational complexity. While there are many previous efforts that try to reduce the CNN model size or the amount of com…

Cited by 593 publications (382 citation statements)
References 21 publications
“…Finally, there are various methods to reduce the weights in a DNN (e.g., network pruning in Section VII-B2). Table IV shows another example of these DNN model metrics, by comparing sparse DNNs pruned using [142] to dense DNNs.…”
Section: A. Metrics for DNN Models
confidence: 99%
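The sparse-vs-dense comparison described in this quote can be illustrated with a minimal magnitude-pruning sketch. This uses a generic smallest-magnitude heuristic, not the paper's energy-aware criterion; the function names and the keep ratio below are illustrative assumptions:

```python
import numpy as np

def prune_by_magnitude(w, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping roughly
    keep_ratio of them (a generic heuristic, not the energy-aware
    criterion of the cited paper; ties at the threshold may keep more)."""
    flat = np.abs(w).ravel()
    k = int(len(flat) * keep_ratio)
    # Threshold at the k-th largest magnitude; keep everything above it.
    thresh = np.sort(flat)[::-1][k - 1] if k > 0 else np.inf
    mask = np.abs(w) >= thresh
    return w * mask

def model_metrics(w):
    """The kind of model metric the quote compares: total weights,
    nonzero weights after pruning, and the resulting sparsity."""
    nnz = int(np.count_nonzero(w))
    return {"weights": w.size, "nonzero": nnz, "sparsity": 1 - nnz / w.size}

if __name__ == "__main__":
    rng_w = np.random.default_rng(0).standard_normal((100, 100))
    pruned = prune_by_magnitude(rng_w, keep_ratio=0.3)
    print(model_metrics(rng_w))
    print(model_metrics(pruned))
```

Comparing the two metric dictionaries gives the sparse-vs-dense weight-count contrast the quote refers to.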
“…35 [80]. Rather than using the number of weights and MAC operations as proxies for energy, the pruning of the weights can be directly driven by energy itself [142]. An energy evaluation method can be used to estimate the DNN energy that accounts for the data movement from different levels of the memory hierarchy, the number of MACs, and the data sparsity as shown in Fig.…”
Section: B. Reduce Number of Operations and Model Size
confidence: 99%
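The energy evaluation idea in this quote — accounting for the number of MACs, data movement across the memory hierarchy, and data sparsity — can be sketched as a back-of-the-envelope model. All constants, the memory-level breakdown, and the function signature below are illustrative assumptions, normalized to the cost of one MAC:

```python
def estimate_energy(macs, data_moved, sparsity=0.0,
                    e_mac=1.0, e_level=(1.0, 6.0, 200.0)):
    """Rough energy model: MAC energy plus data-movement energy per
    memory level (e.g., register file, on-chip buffer, DRAM).
    Zero-valued data (the `sparsity` fraction) is assumed to be
    skipped entirely. Energy constants are illustrative only."""
    active = 1.0 - sparsity
    compute = macs * active * e_mac
    movement = sum(n * e for n, e in zip(data_moved, e_level)) * active
    return compute + movement

if __name__ == "__main__":
    # Hypothetical layer: 1M MACs; accesses per memory level (reg, buffer, DRAM).
    dense = estimate_energy(1e6, (1e6, 1e5, 1e4), sparsity=0.0)
    pruned = estimate_energy(1e6, (1e6, 1e5, 1e4), sparsity=0.5)
    print(f"dense: {dense:.3e}, 50% pruned: {pruned:.3e}")
```

Driving pruning decisions by a model like this, rather than by raw weight or MAC counts, is the distinction the quote draws.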
“…It is observed that the CSR-8 and CSR-16 yield lower total storage when the original matrix is at most 82% and 73% dense, respectively. Furthermore, at 62% density, CSR-8 and CSR-16 yield lower total storage compared to CSR by 23% and 16%, respectively. This is equivalent to 60.04% and 39.86% reduction in the overhead of storing auxiliary vectors for the CSR-8 and CSR-16 compared to the CSR format, respectively.…”
Section: Application to Weight Sub-matrices
confidence: 99%
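The storage crossovers in this quote can be explored qualitatively with a simple bit-count model of CSR storage. The value/index widths, row-pointer width, and matrix size below are illustrative; narrow 8-bit column indices are assumed to suffice (e.g., via delta encoding over sub-matrices), and the exact break-even densities depend on all of these choices:

```python
def csr_storage_bits(rows, cols, density, value_bits=16,
                     index_bits=16, ptr_bits=32):
    """Total bits to store a rows x cols matrix in CSR:
    nonzero values, one column index per nonzero, and rows+1
    row pointers. index_bits=8 models a CSR-8-style narrow-index
    variant (assumes 8 bits suffice, e.g. via delta encoding)."""
    nnz = int(rows * cols * density)
    return nnz * (value_bits + index_bits) + (rows + 1) * ptr_bits

def dense_storage_bits(rows, cols, value_bits=16):
    return rows * cols * value_bits

if __name__ == "__main__":
    rows, cols = 256, 256  # hypothetical weight sub-matrix
    dense = dense_storage_bits(rows, cols)
    for density in (0.5, 0.62, 0.9):
        narrow = csr_storage_bits(rows, cols, density, index_bits=8)
        wide = csr_storage_bits(rows, cols, density, index_bits=16)
        print(f"density={density:.2f}: 8-bit beats dense: {narrow < dense}, "
              f"16-bit beats dense: {wide < dense}")
```

The qualitative pattern matches the quote: the narrow-index variant stays below dense storage up to a higher density than the 16-bit-index variant, because the per-nonzero index overhead is halved.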
“…As a first step for modeling the performance of embedded CNNs, recent studies have carried out systematic benchmarking on several hardware systems [6, 27–29]. Gaining in specificity, an energy estimation methodology for CNN accelerators has been introduced in [30, 31]. Power monitor tools available on GPU-based platforms have also been employed to measure and model energy consumption [32–34].…”
Section: Related Work
confidence: 99%