2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2018.8461702

An Analytical Method to Determine Minimum Per-Layer Precision of Deep Neural Networks

Cited by 40 publications (53 citation statements)
References 3 publications
“…Among these techniques, reduced-precision computing has demonstrated large benefits with low or negligible impact on the network accuracy [9], [10]. It has been shown that the optimal precision not only varies from one neural network to another but can also vary within a neural network itself [11], [12], from layer to layer or even from channel to channel. This has led to a new trend of ultra-efficient run-time precision-scalable neural processors for embedded Deep Neural Network (DNN) processing in mobile devices and IoT nodes.…”
mentioning
confidence: 99%
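The per-layer precision idea in the excerpt above can be illustrated with a minimal sketch: uniform symmetric quantization applied with a different bit-width in each layer of a toy network. The three-layer MLP, the 8/6/4-bit assignment, and the helper `quantize` below are hypothetical, chosen only to show the mechanics; they are not taken from the cited works.

```python
import numpy as np

def quantize(x, num_bits):
    """Uniform symmetric quantize-then-dequantize of a tensor to `num_bits` bits."""
    scale = np.max(np.abs(x)) / (2 ** (num_bits - 1) - 1)  # full-range scaling
    return np.round(x / scale) * scale

# Hypothetical 3-layer MLP with a different weight precision per layer.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 16)) for _ in range(3)]
bits_per_layer = [8, 6, 4]  # assumed per-layer assignment, varies layer to layer

x = rng.standard_normal(16)
for W, b in zip(weights, bits_per_layer):
    x = np.maximum(quantize(W, b) @ x, 0.0)  # quantized weights, ReLU activation
print(x)
```

In practice the per-layer bit-widths would be chosen by an accuracy- or noise-budget criterion such as the analytical method of the cited paper, rather than fixed by hand as here.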
“…Prior work [25,26] indicates the requirement SNR_T (dB) > SNR*_T (dB) = 10 dB–40 dB (see Fig. 2) for the inference accuracy of an FX network to be within 1% of the corresponding FL network for popular DNNs (AlexNet, VGG-9, VGG-16, ResNet-18) deployed on the ImageNet and CIFAR-10 datasets.…”
Section: Precision Assignment Methodology for IMCs
mentioning
confidence: 99%
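The SNR_T requirement quoted above can be checked empirically: compute the signal-to-quantization-noise ratio (in dB) of a fixed-point (FX) tensor against its floating-point (FL) reference and compare it to a target threshold. The sketch below is illustrative only; the random activations, the uniform quantizer, and the 30 dB target (a point inside the 10 dB–40 dB range mentioned in the excerpt) are assumptions, not values from the cited works.

```python
import numpy as np

def quantize(x, num_bits):
    """Uniform symmetric quantize-then-dequantize to `num_bits` bits."""
    scale = np.max(np.abs(x)) / (2 ** (num_bits - 1) - 1)
    return np.round(x / scale) * scale

def snr_db(reference, test):
    """SNR in dB of a quantized tensor relative to its floating-point reference."""
    noise = reference - test
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(1)
activations_fl = rng.standard_normal(10_000)  # stand-in for FL activations

snr_target_db = 30.0  # assumed target inside the 10-40 dB range
for bits in range(2, 12):
    snr = snr_db(activations_fl, quantize(activations_fl, bits))
    print(f"{bits:2d} bits -> SNR_T = {snr:5.1f} dB", "meets target" if snr > snr_target_db else "")
```

Each added bit raises the quantization SNR by roughly 6 dB, so the minimum precision meeting a given SNR_T target can be read off directly from such a sweep.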
“…The BGC is commonly employed to assign the output precision in digital architectures [9,25]. BGC sets as:…”
Section: Bit Growth Criterion (BGC)
mentioning
confidence: 99%
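The excerpt is truncated before the BGC expression itself, so the sketch below uses the standard worst-case bit-growth rule for a length-N dot product (output width = activation width + weight width + ceil(log2 N)) as an assumed stand-in for the criterion referred to in [9,25].

```python
import math

def bgc_output_bits(b_x: int, b_w: int, n: int) -> int:
    """Worst-case bit growth for an n-term dot product of b_x-bit activations
    and b_w-bit weights: the accumulator avoids overflow if it carries an
    extra ceil(log2(n)) bits (assumed form of the bit growth criterion)."""
    return b_x + b_w + math.ceil(math.log2(n))

# Example: 8-bit activations, 8-bit weights, 512-element dot product.
print(bgc_output_bits(8, 8, 512))  # -> 25 bits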
“…As a counterexample, Li et al. provided derivations for quantised DNNs' convergence criteria [79]. Sakr et al. also investigated analytical guarantees on the numerical precision of DNNs with fixed-point quantisation [119].…”
Section: Research Objectives
mentioning
confidence: 99%