2021
DOI: 10.48550/arxiv.2110.02929
Preprint

Adversarial Attacks on Spiking Convolutional Neural Networks for Event-based Vision

Abstract: Event-based sensing using dynamic vision sensors is gaining traction in low-power vision applications. Spiking neural networks work well with the sparse nature of event-based data and suit deployment on low-power neuromorphic hardware. Being a nascent field, the sensitivity of spiking neural networks to potentially malicious adversarial attacks has received very little attention so far. In this work, we show how white-box adversarial attack algorithms can be adapted to the discrete and sparse nature of event-based…
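The abstract only hints at how white-box attacks are adapted to discrete, sparse event data. As a purely illustrative sketch (not the paper's actual algorithm), a gradient-guided event-flipping step on a binary event tensor might look like the following; the PyTorch model interface, the (1, T, C, H, W) input layout, and the flip budget n_flips are all assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def event_flip_attack(model, events, label, n_flips=100):
    """Illustrative white-box attack on a binary event tensor.

    events: float tensor of shape (1, T, C, H, W) with values in {0, 1}
    label:  ground-truth class index, LongTensor of shape (1,)
    Flips the n_flips locations whose loss gradient most favors changing
    the current value (0 -> 1 or 1 -> 0), keeping the input discrete.
    """
    events = events.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(events), label)
    loss.backward()

    grad = events.grad
    # Turning an event on (0 -> 1) raises the loss when the gradient is
    # positive; turning one off (1 -> 0) raises it when it is negative.
    gain = torch.where(events.detach() > 0.5, -grad, grad)
    idx = torch.topk(gain.flatten(), n_flips).indices

    adv = events.detach().clone().flatten()
    adv[idx] = 1.0 - adv[idx]  # flip the selected events
    return adv.view_as(events)
```

In practice the perturbed input would be re-evaluated against the model and the flip budget increased until misclassification; that outer loop is omitted here for brevity.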

Cited by 2 publications (3 citation statements)
References 18 publications
“…As stated, we are the first to test FL with SNNs and neuromorphic data. Specific to the security of SNNs, [33,9,28] evaluated adversarial examples using both regular and neuromorphic data. On backdoor attacks, recent works have evaluated SNNs with neuromorphic data [1,2], even with neuromorphic triggers that change with time, which are invisible to the human eye.…”
Section: Related Work (mentioning, confidence: 99%)
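For the backdoor attacks with time-varying neuromorphic triggers mentioned in the statement above, a minimal hypothetical sketch of the general idea (a small event patch that drifts across time bins of a poisoned sample) could look like this; it is not the trigger design of the cited works, and the patch size and drift pattern are arbitrary assumptions.

```python
import torch

def add_moving_trigger(events, patch=2, value=1.0):
    """Stamp a small event patch that shifts one pixel per time bin.

    events: binary tensor of shape (T, C, H, W); returns a poisoned copy.
    Purely illustrative; not the trigger used in the cited backdoor papers.
    """
    poisoned = events.clone()
    T, C, H, W = poisoned.shape
    for t in range(T):
        x = t % (W - patch)            # trigger drifts horizontally over time
        poisoned[t, :, :patch, x:x + patch] = value
    return poisoned
```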
“…In the last few years, new threats have been discovered [17], such as adversarial examples, model inversion, and backdoor attacks, to name a few. Regarding SNNs, recent investigations have concluded that SNNs are also vulnerable to some of these attacks, i.e., adversarial examples [28,33,9] and backdoor attacks [2,1]. In the context of FL, security and privacy evaluations have also been in the scope of security experts [3,25], concluding that FL is vulnerable to privacy attacks, such as membership inference, and security attacks, such as backdoor attacks.…”
Section: Introduction (mentioning, confidence: 99%)
“…In adversarial attacks, the attacker generates adversarial examples to fool a model into making wrong predictions. SNNs have been demonstrated to be attackable through adversarial examples (Sharmin et al., 2019; Liang et al., 2021a; Büchel et al., 2021; Marchisio et al., 2021). It is urgent to explore an efficient way to improve the robustness of SNN models.…”
Section: Introduction (mentioning, confidence: 99%)