2020
DOI: 10.48550/arxiv.2003.10399
Preprint

Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations

Abstract: In the recent quest for trustworthy neural networks, we present the Spiking Neural Network (SNN) as a potential candidate for inherent robustness against adversarial attacks. In this work, we demonstrate that accuracy degradation is less severe in SNNs than in their non-spiking counterparts on the CIFAR10 and CIFAR100 datasets with deep VGG architectures. We attribute this robustness to two fundamental characteristics of SNNs and analyze their effects. First, we exhibit that input discretization introduced by the Poiss…
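The input discretization the abstract refers to is typically implemented as rate coding: each pixel intensity becomes the firing probability of a Bernoulli spike train, approximating a Poisson process over time-steps. A minimal sketch (the function name and parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def poisson_encode(image, num_steps, rng=None):
    """Convert pixel intensities in [0, 1] to a Bernoulli spike train.

    At each time-step a pixel fires with probability equal to its
    intensity, so the empirical spike rate approximates the analog value.
    """
    rng = np.random.default_rng() if rng is None else rng
    # One Bernoulli draw per pixel per time-step.
    return (rng.random((num_steps, *image.shape)) < image).astype(np.float32)

# A mid-grey pixel (0.5) fires on roughly half of the time-steps.
spikes = poisson_encode(np.full((4, 4), 0.5), num_steps=100)
rate = spikes.mean(axis=0)  # empirical firing rate per pixel, close to 0.5
```

Because the encoder output is binary and stochastic, small adversarial perturbations of the input are partially absorbed by the discretization, which is one of the two robustness mechanisms the paper analyzes.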

Cited by 6 publications (8 citation statements)
References 22 publications
“…After that, we deconvolve the accumulated gradients with the weights of the first layer. These deconvolved gradients have values similar to the original image gradients before the Poisson spike generator, as validated in previous work [45]. Thus, we obtain a gradient δx converted into the spatial domain, and the input noise is updated with δx scaled by ζ. Algorithm 4 illustrates the overall optimization process.…”
Section: Attack Scenarios for Class Leakage (supporting)
confidence: 70%
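The back-projection step in this quoted passage can be sketched as follows, assuming a fully connected first layer as a stand-in for the convolutional one; `update_input_noise`, `W`, and `zeta` are hypothetical names, and the transpose multiplication plays the role of the "deconvolution" that maps spike-domain gradients back to image space:

```python
import numpy as np

def update_input_noise(noise, grads_per_step, W, zeta):
    """One update of the input noise, simplified to a linear first layer.

    grads_per_step: gradients w.r.t. the first layer's pre-activation at
    each time-step, shape (T, hidden_dim).
    W: first-layer weight matrix, shape (hidden_dim, input_dim).
    """
    g = grads_per_step.sum(axis=0)   # accumulate gradients over time-steps
    delta_x = g @ W                  # back-project into the spatial domain
    return noise + zeta * delta_x    # scaled gradient update on the noise
```

Accumulating over time-steps before back-projecting is what lets the attack recover a single image-space gradient despite the stochastic spike encoding at the input.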
“…iii) BNTT achieves significantly higher performance than the other methods across all noise intensities. This is because using BNTT decreases the overall number of time-steps which is a crucial contributing factor towards robustness (Sharmin et al 2020). These results imply that, in addition to low-latency and energy-efficiency, our BNTT method also offers improved robustness for suitably implementing SNNs in a real-world scenario.…”
Section: Analysis on Robustness (mentioning)
confidence: 82%
“…We also analyze the effect of varying factors such as a leak rate and related hyperparameters on SAM and overall prediction. Finally, we provide a visual understanding of previously observed results [43] that SNNs are more robust to adversarial attacks [15]. We measure the difference of heat maps between clean samples and adversarial samples using SAM to highlight the robustness of SNNs with respect to ANNs.…”
Section: Introduction (mentioning)
confidence: 90%
“…Previous studies [43,42] have shown that SNNs are more robust to adversarial inputs than ANNs. In order to observe the effectiveness of SNNs under attack, we conduct a qualitative and quantitative comparison between Grad-CAM and SAM.…”
Section: Adversarial Robustness of SNN (mentioning)
confidence: 99%
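The heat-map comparison described in the last two citation statements can be quantified in several ways; one simple option (an assumption here, since the exact SAM metric is not given in these excerpts) is the mean absolute difference between min-max-normalized attention maps:

```python
import numpy as np

def heatmap_shift(map_clean, map_adv):
    """Mean absolute difference between two normalized attention maps.

    A smaller shift under attack indicates the network's attention is
    more stable, one way to compare SNN and ANN robustness.
    """
    def norm(m):
        m = m - m.min()
        return m / (m.max() + 1e-12)  # guard against a constant map
    return float(np.abs(norm(map_clean) - norm(map_adv)).mean())
```

Under this metric, identical clean and adversarial maps give a shift of 0, and fully inverted maps give a shift near 1.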