2021
DOI: 10.1049/cdt2.12016

Accelerating Deep Neural Networks implementation: A survey

Abstract: Deep Learning (DL) applications are increasingly being adopted across diverse fields. Deploying such Deep Neural Networks (DNNs) on embedded devices remains a challenging task given their massive computation and storage requirements. Since the number of operations and parameters grows with the complexity of the model architecture, performance depends strongly on the target hardware resources and, in particular, on the accelerator's memory footprint. Recent research studies have d…

Cited by 22 publications (6 citation statements). References 92 publications.
“…Research on accelerating deep neural network (DNN) computation is receiving increased attention, especially as DNNs exhibit exceptional performance across various artificial intelligence disciplines. [ 18 ] A primary challenge in contemporary DNN acceleration lies in the extensive data movement between on‐chip processors and off‐chip memory. [ 19 ] In contrast, the binary neural network (BNN) is garnering interest due to its significantly reduced memory requirements (Figure 1a‐i).…”
Section: Results
confidence: 99%
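The memory advantage of binary neural networks noted above comes from storing each weight in 1 bit instead of 32. A minimal arithmetic sketch (the layer size below is a hypothetical example, not taken from the survey):

```python
def weight_bytes(n_params: int, bits_per_weight: int) -> int:
    """Storage in bytes for n_params weights at a given precision."""
    return n_params * bits_per_weight // 8

# Hypothetical fully connected layer: 4096 x 4096 weights.
n_params = 4096 * 4096
fp32_bytes = weight_bytes(n_params, 32)  # standard float32 weights
bnn_bytes = weight_bytes(n_params, 1)    # binarized (1-bit) weights
print(fp32_bytes // bnn_bytes)  # -> 32: binarization cuts weight storage 32x
```

At this ratio, weights that would overflow on-chip SRAM in float32 can often fit entirely on-chip when binarized, which is what reduces the off-chip data movement the citing paper identifies as the primary bottleneck.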
“…The width multiplier and resolution multiplier were introduced into MobileNet to reduce the amount of computation and the number of parameters, yielding a smaller model with lower computational cost. The accuracy of MobileNetV1 is slightly lower than that of VGG16 but better than that of GoogLeNet [245]. However, MobileNet has absolute advantages in terms of computation and parameter volume.…”
Section: MobileNet
confidence: 91%
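The effect of the two multipliers can be sketched with the multiply-add cost model of a depthwise-separable convolution from the MobileNetV1 paper; the layer dimensions below are illustrative assumptions:

```python
def dw_separable_madds(dk: int, m: int, n: int, df: int,
                       alpha: float = 1.0, rho: float = 1.0) -> int:
    """Multiply-adds of one depthwise-separable convolution with
    width multiplier alpha (thins the channels) and resolution
    multiplier rho (shrinks the feature map)."""
    m_in, n_out = int(alpha * m), int(alpha * n)
    d_feat = int(rho * df)
    depthwise = dk * dk * m_in * d_feat * d_feat   # one dk x dk filter per channel
    pointwise = m_in * n_out * d_feat * d_feat     # 1x1 convolution mixes channels
    return depthwise + pointwise

# Illustrative layer: 3x3 kernels, 512 -> 512 channels, 14x14 feature map.
full = dw_separable_madds(3, 512, 512, 14)
slim = dw_separable_madds(3, 512, 512, 14, alpha=0.5, rho=0.5)
print(full / slim)  # alpha = rho = 0.5 gives roughly a 16x cost reduction
```

Since the dominant pointwise term scales with alpha squared times rho squared, halving both multipliers cuts the cost by roughly a factor of 16, at some loss in accuracy.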
“…It reviews the following four dimensions: (1) the DNN model, (2) hardware aspects, (3) resource optimization, and (4) the application perspective. Finally, [20] explains the different DNN optimization techniques presented in recent research, reviewing studies that target the implementation of DNN models on FPGAs.…”
Section: A. Papers With Similar Background
confidence: 99%