2019
DOI: 10.1007/978-3-030-31514-6_9

Robustness of Neural Networks to Parameter Quantization

Abstract: Quantization, a commonly used technique to reduce the memory footprint of a neural network for edge computing, entails reducing the precision of the floating-point representation used for the parameters of the network. The impact of such rounding-off errors on the overall performance of the neural network is estimated using testing, which is not exhaustive and thus cannot be used to guarantee the s…

Keywords: Neural Networks · Edge Computing · Parameter Quantization · Robustness · Satisfiability Modulo Theories
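The parameter quantization the abstract refers to can be pictured with a minimal sketch: symmetric rounding of float32 weights to int8 with a single scale factor. The scheme below is an assumption chosen for illustration, not necessarily the one analyzed in the paper.

```python
import numpy as np

# Illustrative symmetric int8 quantization of a weight matrix.
# The scale factor and rounding rule are assumptions for this sketch.

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a scale factor."""
    scale = np.max(np.abs(weights)) / 127.0        # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 128).astype(np.float32)   # example weight matrix
q, scale = quantize_int8(w)
rounding_error = np.max(np.abs(w - dequantize(q, scale)))
print(f"worst-case rounding error per weight: {rounding_error:.5f}")  # roughly scale / 2
```

Estimating how such per-weight rounding errors propagate to the network's outputs by testing samples only finitely many inputs, which is the gap the paper's SMT-based analysis is aimed at.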

Cited by 4 publications (5 citation statements) · References 18 publications
“…Some works also consider quantizing activations [32], [34], [35] or gradients [36], [37], [38]. While works such as [14], [15], [16], [39] study the robustness of DNNs to quantization, the robustness of various quantization schemes against random bit errors has not been studied. This is in stark contrast to our findings that quantization impacts robustness significantly.…”
Section: Related Work
confidence: 99%
“…We address robustness against random and/or adversarial bit errors in three steps: First, we analyze the impact of fixed-point quantization schemes on bit error robustness. This has been neglected both in prior work on low-voltage DNN accelerators [4], [19] and in work on quantization robustness [14], [15], [16]. This yields our robust quantization (Sec.…”
Section: Robustness Against Bit Errors
confidence: 99%
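The bit-error setting described in this statement can be sketched as follows: weights are stored in an int8 fixed-point format and each stored bit is flipped independently with some probability, perturbing the dequantized weights. The simulation below is an assumption for illustration, not the cited work's accelerator fault model.

```python
import numpy as np

# Assumed simulation of random bit errors in int8-quantized weights; a sketch
# of the setting described above, not the cited papers' experimental setup.

rng = np.random.default_rng(0)

def flip_random_bits(q: np.ndarray, p: float) -> np.ndarray:
    """Flip each stored bit of an int8 array independently with probability p."""
    raw = q.view(np.uint8)                                    # reinterpret as raw bytes
    mask = rng.random((raw.size, 8)) < p                      # which of the 8 bits to flip
    flips = np.packbits(mask, axis=1, bitorder="little").reshape(raw.shape)
    return (raw ^ flips).view(np.int8)

w = rng.standard_normal(10_000).astype(np.float32)
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)   # quantized weights

q_faulty = flip_random_bits(q, p=0.01)                        # 1% random bit error rate
weight_error = np.abs(q_faulty.astype(np.float32) - q.astype(np.float32)) * scale
print("mean induced weight perturbation:", weight_error.mean())
```

Because a flip in a high-order bit shifts the stored integer far more than one in a low-order bit, the induced weight perturbation depends strongly on the quantization scheme, which is the point the citing work makes.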
“…The typical approach consists in modeling an ANN and its corresponding verification properties, in SMT-LIB [6], using integer and real arithmetic theories, and then employing off-the-shelf SMT solvers to find property violations. Recently, Murthy et al. [40] have used SMT to quantify neural-network robustness regarding parameter perturbation. Unfortunately, such verification schemes cannot precisely capture issues that appear in implementations of ANNs, for two main reasons: (i) one cannot model bit-level operations using the theory of integer and real arithmetic [14], and (ii) libraries, such as TensorFlow [21], often take advantage of available graphics processing units (GPUs) to explore the inherent parallelism of ANNs; the translation to GPUs can be problematic [42,45].…”
Section: Introduction
confidence: 99%
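The SMT-based verification style described in that statement can be illustrated with a toy example using Z3's Python API: a tiny ReLU network is encoded over real arithmetic and an off-the-shelf solver searches for an input that violates an output property. The weights, input region, and property below are made up for illustration; real verifiers generate such encodings from a trained network.

```python
from z3 import Real, Solver, If, sat

# Toy SMT encoding of a 2-2-1 ReLU network over real arithmetic (illustrative only).
x1, x2 = Real("x1"), Real("x2")          # network inputs
h1, h2 = Real("h1"), Real("h2")          # hidden activations
y = Real("y")                            # network output

def relu(e):
    return If(e > 0, e, 0)

s = Solver()
# Hidden layer with fixed example weights.
s.add(h1 == relu(0.5 * x1 - 1.0 * x2 + 0.1))
s.add(h2 == relu(-0.3 * x1 + 0.8 * x2))
# Output layer.
s.add(y == 1.2 * h1 - 0.7 * h2)

# Input region of interest.
s.add(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1)
# Negation of the desired property "y <= 1": any model is a counterexample.
s.add(y > 1)

if s.check() == sat:
    print("property violated, counterexample:", s.model())
else:
    print("property holds on the input region")
```

As the citing work notes, an encoding like this works at the level of idealized real arithmetic, so it cannot by itself capture bit-level effects of quantized or GPU-executed implementations.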