Approximate Circuits, 2018
DOI: 10.1007/978-3-319-99322-5_13
Hardware–Software Approximations for Deep Neural Networks

Cited by 5 publications (7 citation statements), published 2020–2024
References 29 publications
“…Extracting meaningful knowledge from a single input sample can require enormous MAC operations. The number of MAC operations can reach the magnitude of billions [42]. Additionally, a single deep learning network can contain over a million parameters [43].…”
Section: Challenges To Be Investigated (mentioning)
confidence: 99%
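The scale quoted above is easy to reproduce with back-of-the-envelope counting. Below is a minimal sketch, with hypothetical layer sizes not drawn from the cited works, of how parameter and per-sample MAC counts are tallied for fully connected layers:

```python
# Hypothetical fully connected stack (sizes chosen for illustration only).
layer_sizes = [224 * 224 * 3, 4096, 4096, 1000]

params = 0
macs = 0
for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
    params += fan_in * fan_out + fan_out  # weight matrix + bias vector
    macs += fan_in * fan_out              # one MAC per weight, per input sample

print(f"parameters:  {params:,}")   # ~637 million for these sizes
print(f"MACs/sample: {macs:,}")     # ~637 million, before any convolutions
```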
“…As a result, deep learning places high demands on processing ability, memory capacity, and energy efficiency. It is a vital issue to optimize deep learning networks by eliminating ineffectual MAC operations and parameters [42]. The second barrier is fitting DNNs into diversified modern hardware platforms.…”
Section: Challenges To Be Investigated (mentioning)
confidence: 99%
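One concrete reading of "eliminating ineffectual MAC operations" is zero-skipping: after ReLU, many activations are exactly zero, so their multiplications cannot change the accumulator. A minimal Python sketch of the idea, illustrative rather than any specific accelerator's scheme:

```python
import numpy as np

def dot_skip_zeros(activations, weights):
    """Dot product that skips MACs whose activation operand is zero."""
    acc = 0.0
    skipped = 0
    for a, w in zip(activations, weights):
        if a == 0.0:       # ineffectual MAC: contributes nothing to acc
            skipped += 1
            continue
        acc += a * w
    print(f"skipped {skipped}/{len(activations)} ineffectual MACs")
    return acc

rng = np.random.default_rng(0)
acts = np.maximum(rng.standard_normal(1024), 0.0)  # post-ReLU: ~half zeros
wts = rng.standard_normal(1024)
result = dot_skip_zeros(acts, wts)
```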
“…For instance, ResNet, the pioneering DNN to surpass human-level accuracy in the ImageNet challenge, requires 11.3 GMAC operations and 60 million weights in its 152-layer variant [9,10]. Typically, processing a single input sample in a DNN demands approximately one billion MAC operations [11]. This highlights the potential for substantial reductions in computational demand by enhancing the efficiency of MAC operations.…”
Section: Introduction (mentioning)
confidence: 99%
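Figures like 11.3 GMAC follow from the standard per-layer count for convolutions, MACs = H_out * W_out * C_out * (k * k * C_in), summed over all layers. A quick illustration with a hypothetical 3x3 layer, not an actual ResNet-152 layer:

```python
# Hypothetical 3x3 convolution layer (sizes for illustration only).
h_out, w_out = 56, 56        # output feature-map height and width
c_in, c_out, k = 64, 64, 3   # input/output channels, kernel size

macs = h_out * w_out * c_out * (k * k * c_in)
print(f"{macs:,} MACs for this single layer")  # ~115.6 million

# Per-layer counts of this magnitude, accumulated over 152 layers,
# add up to totals on the order of the 11.3 GMAC quoted above.
```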
“…This implies that precision in output is crucial only when the input is a positive value. The input to a ReLU function typically originates from the output of a fully connected or convolution layer in the DNN, involving a substantial number of MAC operations [11]. It is indicated by various researchers that a significant proportion, ranging from 50% to 95%, of ReLU inputs in DNNs are negative [20][21][22][23].…”
Section: Introduction (mentioning)
confidence: 99%
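The observation that 50% to 95% of ReLU inputs are negative is what makes aggressive approximation safe there: any error introduced on a negative pre-activation is masked, because ReLU maps the entire negative range to zero. A small demonstration with a crude quantizer on synthetic data, illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
pre_act = rng.standard_normal(100_000)   # synthetic pre-activations

# Fraction of ReLU inputs that are negative (~50% for zero-mean data;
# the works cited above report 50%-95% in trained DNNs).
print(f"negative inputs: {(pre_act < 0).mean():.1%}")

coarse = np.round(pre_act * 4) / 4       # crude 2-fractional-bit quantizer
exact = np.maximum(pre_act, 0)           # exact ReLU
approx = np.maximum(coarse, 0)           # ReLU on the approximate value

# Quantization error on negative inputs is fully masked by ReLU:
neg_err = np.abs(exact - approx)[pre_act < 0].max()
print(f"max error where input < 0: {neg_err:.3f}")   # prints 0.000
```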
“…For image classification, deep convolutional neural networks (CNNs), trained on large datasets such as ImageNet2012 [1], are the gold standard. However, even after a CNN has been trained, using it for inference requires on the order of millions (if not billions) of multiply-accumulate operations (see Hanif et al. [2], figure 13.5) to propagate real-valued unit activations through many layers of trained weights. This results in computation and energy [3] requirements that make them unsuitable for many mobile and edge-computing applications and raises concerns about the carbon footprint of neural network algorithms.…”
Section: Introduction (mentioning)
confidence: 99%