2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8851966
Improving Noise Tolerance of Mixed-Signal Neural Networks

Abstract: Mixed-signal hardware accelerators for deep learning achieve orders of magnitude better power efficiency than their digital counterparts. In the ultra-low power consumption regime, limited signal precision inherent to analog computation becomes a challenge. We perform a case study of a 6-layer convolutional neural network running on a mixed-signal accelerator and evaluate its sensitivity to hardware-specific noise. We apply various methods to improve noise robustness of the network and demonstrate an effective…

Cited by 36 publications (39 citation statements) | References 47 publications
“…There exist many different methods of training a neural network with noise that aim to improve the resilience of the model to analog mixed-signal hardware. These include injecting additive noise on the inputs of every layer [20], on the preactivations [22,23], or just adding noise on the input data [47]. Moreover, injecting multiplicative Gaussian noise to the weights [34] ($\sigma^{l}_{\delta W^{\mathrm{tr}};ij} \propto |W^{l}_{ij}|$) is also defensible regarding the observed noise on the hardware.…”
Section: Discussion (mentioning, confidence: 99%)
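As a rough illustration of the multiplicative weight-noise variant quoted above ($\sigma \propto |W|$), the PyTorch-style sketch below perturbs a linear layer's weights with multiplicative Gaussian noise during training only. The class name `NoisyLinear` and the `noise_scale` value are illustrative assumptions, not taken from any of the cited works.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    """Linear layer whose weights are perturbed with multiplicative Gaussian
    noise during training: sigma_ij proportional to |W_ij| (illustrative)."""

    def __init__(self, in_features, out_features, noise_scale=0.1, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.noise_scale = noise_scale  # relative noise strength (assumed value)

    def forward(self, x):
        if self.training and self.noise_scale > 0:
            # W' = W * (1 + eta * N(0, 1)), so the perturbation std is eta * |W|
            weight = self.weight * (1.0 + torch.randn_like(self.weight) * self.noise_scale)
        else:
            weight = self.weight
        return F.linear(x, weight, self.bias)
```

At inference time the layer falls back to the clean weights, so the noise acts purely as a training-time regularizer mimicking the analog hardware.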
“…In this way, the model would have to be trained only once and could be deployed on a multitude of different chips. To this end, several works have proposed to inject noise in the training algorithm to the layer inputs [20], synaptic weights [21], and pre-activations [22,23]. However, previous demonstrations have generally been limited to rather simple and shallow networks, and experimental validations of the effectiveness of the various approaches have been missing.…”
(mentioning, confidence: 99%)
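The additive input/pre-activation noise referenced above can be sketched as a small module placed between a layer and its nonlinearity. `PreActivationNoise` and the default `sigma` are illustrative assumptions; the cited works define their own noise models and magnitudes.

```python
import torch
import torch.nn as nn

class PreActivationNoise(nn.Module):
    """Adds zero-mean Gaussian noise to its input during training only.
    The module name and default sigma are illustrative assumptions."""

    def __init__(self, sigma=0.05):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        if self.training and self.sigma > 0:
            x = x + torch.randn_like(x) * self.sigma
        return x

# Usage: inject on the pre-activations, i.e. between a layer and its nonlinearity.
block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    PreActivationNoise(sigma=0.05),
    nn.ReLU(),
)
```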
“…Other work such as [19] adopted a dynamic fixed-point data representation format to minimize the unused most significant bits (MSBs) and proposed a variation-aware training (VAT) methodology with noise injection during the training of the DNN model. In VAT [20], [21], [23], the RRAM array is read to characterize device variations, and these statistical variations are then embedded into the training of the neural network. However, these approaches require computationally expensive retraining for each RRAM device.…”
Section: Introduction (mentioning, confidence: 99%)
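A hedged sketch of the variation-aware training (VAT) idea described above: per-weight perturbations whose statistics stand in for the characterized RRAM device variations are injected during training. Here `device_sigma` is a placeholder for measured statistics; a real flow would populate it from array read-outs, and the exact noise model differs between the cited works.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationAwareLinear(nn.Linear):
    """Sketch of variation-aware training: weights are perturbed with noise
    whose per-weight statistics mimic characterized RRAM device variations.
    `device_sigma` is a placeholder, not data from a real array."""

    def __init__(self, in_features, out_features, device_sigma=0.03, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        # Per-weight standard deviation, relative to the conductance range;
        # in practice this buffer would be filled from device measurements.
        self.register_buffer(
            "device_sigma", torch.full((out_features, in_features), device_sigma)
        )

    def forward(self, x):
        if self.training:
            # Additive conductance variation scaled by the current weight range.
            w_range = self.weight.detach().abs().max()
            weight = self.weight + torch.randn_like(self.weight) * self.device_sigma * w_range
        else:
            weight = self.weight
        return F.linear(x, weight, self.bias)
```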
“…Neural networks generally exhibit a tolerant response to small numerical variations (i.e., analog domain noise) in weights and activations [13,20]. Yet, the impact of errors is shown to be asymmetric, with errors being mostly benign unless they lead to a significant numerical increase in variable magnitudes [17,25].…”
Section: Containing the Impact of Bit-Errors in Deep Neural Networks (mentioning, confidence: 99%)
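One common way to keep errors from producing the large magnitude increases mentioned above is to bound activation values, for example with ReLU6-style clipping. The sketch below is a generic illustration of that idea, not the specific mechanism used in the cited papers; the bound of 6.0 is an arbitrary example value.

```python
import torch
import torch.nn as nn

class ClampedReLU(nn.Module):
    """ReLU with an upper bound on its output (ReLU6-style). Clipping keeps a
    corrupted high-order bit from inflating an activation without limit; the
    default bound is an arbitrary example value."""

    def __init__(self, bound=6.0):
        super().__init__()
        self.bound = bound

    def forward(self, x):
        return torch.clamp(torch.relu(x), max=self.bound)
```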