2019 Tenth International Green and Sustainable Computing Conference (IGSC)
DOI: 10.1109/igsc48788.2019.8957192
Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks

Abstract: Recent studies identify that Deep Learning Neural Networks (DNNs) are vulnerable to subtle perturbations which are not perceptible to the human visual system but can fool DNN models and lead to wrong outputs. A class of adversarial attack network algorithms has been proposed to generate robust physical perturbations under different circumstances. These algorithms are a first effort toward secure deep learning, providing an avenue to train future defense networks; however, the intrinsic complexity…

Cited by 5 publications (3 citation statements)
References 25 publications
“…Integrity attacks often target critical memory locations, e.g., the cryptographic key, program counter and privilege registers. These attacks are usually a first step for performing further malicious activities, e.g., hijacking the control flow [35] and fooling machine learners [36].…”
Section: Integrity
mentioning
confidence: 99%
“…Adversarial example generation algorithms, such as fast gradient sign method [87], universal perturbations [88] and Carlini and Wagner (C&W) attack [89], have succeeded in subverting the deep learning model output with high success rate. Hardware accelerator for the generation of adversarial examples has also been proposed to improve the attack efficiency [36]. The imperceptibility of the perturbation and generalization ability across models further aggravate the damage of such attacks.…”
Section: Trojans Insertion Through
mentioning
confidence: 99%
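The preceding statement names the fast gradient sign method (FGSM) among the attack algorithms whose example generation the cited accelerator is designed to speed up. For orientation, a minimal single-step FGSM sketch in PyTorch follows; the model, image, label, and epsilon value are illustrative assumptions and are not taken from the cited works.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Single-step FGSM: perturb the input along the sign of the loss gradient.
    # model, image, label, and epsilon are placeholder inputs for this sketch.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss.
    adv_image = image + epsilon * image.grad.sign()
    # Keep the adversarial example in the valid input range [0, 1].
    return adv_image.clamp(0.0, 1.0).detach()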
“…In particular, [62] proposes an end-to-end framework based on the voting results of multiple detectors, in parallel with the execution of the target DNN to detect malicious inputs during inference; [76] proposes an elastic heterogeneous DNN accelerator architecture to orchestrate the simultaneous execution of the target DNN and the detection network for detecting adversarial samples via an elastic management of the on-chip buffer and PE computing resources; [23] builds an algorithm-architecture co-designed system to detect adversarial attacks during inference via a random forest module applied on top of the extracted features from the run-time activations. In addition, [60] builds a robustness-aware accelerator based on BNNs which, however, suffers from the obfuscated gradient problem [4] and [29] strives to speed up the attack generation instead of the defense. Nevertheless, all the existing defensive accelerators rely on additional detection networks/modules to detect adversarial samples at inference time, and thus inevitably introduce additional energy/latency/area overheads that compromise efficiency.…”
Section: Related Work
mentioning
confidence: 99%