2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8851732

A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks

Abstract: In this era of machine learning models, their functionality is being threatened by adversarial attacks. In the face of this struggle for making artificial neural networks robust, finding a model, resilient to these attacks, is very important. In this work, we present, for the first time, a comprehensive analysis of the behavior of more bio-plausible networks, namely Spiking Neural Network (SNN) under state-of-the-art adversarial tests. We perform a comparative study of the accuracy degradation between conventi…

Cited by 50 publications (18 citation statements)
References 10 publications
“…There are various alternatives to Rueckauer's method [34] to optimize the transfer from ReLU NN to spiking NNs. For example, [46] provides impressive results in the context of adversarial AI. In our present study, we explored several ways to search for optimal scaling parameters, including particle swarm optimization (PSO) [47], and simple exhaustive grid search.…”
Section: Conversion Of Trained ReLU NN To SNN
confidence: 99%
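The exhaustive grid search for a scaling parameter mentioned in the statement above can be sketched as follows. This is a minimal illustration, not the cited authors' code: `snn_accuracy` is a hypothetical stand-in objective (in practice it would evaluate the converted spiking network on a validation set), and the search range and step are assumptions.

```python
import numpy as np

# Hypothetical surrogate for validation accuracy of the converted SNN as a
# function of the activation-scaling factor; here it peaks at scale = 1.5.
def snn_accuracy(scale):
    return 0.9 - 0.2 * (scale - 1.5) ** 2

def grid_search(objective, lo, hi, steps):
    # Evaluate the objective at every candidate on a uniform grid and
    # return the best-scoring scaling factor.
    candidates = np.linspace(lo, hi, steps)
    scores = [objective(s) for s in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

best_scale, best_acc = grid_search(snn_accuracy, 0.5, 2.5, 201)
print(best_scale)  # → 1.5
```

A particle swarm optimizer, the other method the statement mentions, would replace the uniform grid with a population of candidate scales updated iteratively, which helps when evaluating each candidate is expensive.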
“…Previous studies [43,42] have shown that SNNs are more robust to adversarial inputs than ANNs. In order to observe the effectiveness of SNNs under attack, we conduct a qualitative and quantitative comparison between Grad-CAM and SAM.…”
Section: Adversarial Robustness Of SNN
confidence: 99%
“…Recently, adversarial attacks for SNNs have been explored, working in black-box [14] and white-box settings [13]. Sharmin et al [15] proposed a methodology to attack (non-spiking) DNNs, and then the adversarial examples mislead the equivalent converted SNNs. Liang et al [16] proposed a gradient-based adversarial attack methodology for SNNs.…”
Section: Adversarial Attacks And Security Threats For SNNs In The Spa...
confidence: 99%
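The gradient-based attack pattern the statement above refers to (as in Sharmin et al., where adversarial examples are crafted against a non-spiking DNN whose gradients are well defined, then transferred to the converted SNN) follows the FGSM recipe: perturb the input by `eps` in the direction of the sign of the input gradient of the loss. A minimal sketch on a hand-written logistic model, not the cited methodology itself; the weights, input, and `eps` are illustrative values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy differentiable surrogate model: a single logistic unit.
w = np.array([1.0, -2.0, 0.5])   # fixed weights (illustrative)
x = np.array([0.2, 0.1, -0.3])   # clean input
y = 1.0                          # true label

# Gradient of the cross-entropy loss w.r.t. the input:
# dL/dx = (sigmoid(w . x) - y) * w
grad_x = (sigmoid(w @ x) - y) * w

# FGSM step: move each input component by eps in the sign of the gradient,
# which increases the loss (lowers the score for the true class).
eps = 0.1
x_adv = x + eps * np.sign(grad_x)
```

In a black-box setting, as in the other cited attack, the gradient would instead be estimated from queries to the model rather than computed analytically.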