2019
DOI: 10.1109/tnnls.2018.2886017
Adversarial Examples: Attacks and Defenses for Deep Learning

Abstract: With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to well-designed input samples, called adversarial examples. Adversarial examples are imperceptible to humans but can easily fool deep neural networks in the testing/deploying stage. The vulnerability to adversarial examples has become one of the major risks for applying deep neural networks in safety…
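The sketch below is a minimal illustration of the kind of gradient-based perturbation the abstract describes, using an FGSM-style step (fast gradient sign method, one of the attack families covered by surveys in this area); the classifier, inputs, labels, and epsilon value are hypothetical placeholders, not code from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return adversarial inputs x' = clip(x + epsilon * sign(grad_x loss)).

    model: a classifier returning logits; x: input batch in [0, 1];
    y: true labels; epsilon: L-infinity perturbation budget (placeholder value).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each input feature in the direction that increases the loss,
    # then clip back to the valid input range so the change stays small.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Hypothetical usage: perturbed = fgsm_attack(classifier, images, labels, epsilon=8/255)
```

A small epsilon keeps the perturbation visually imperceptible while often being enough to flip the model's prediction, which is the vulnerability the survey examines.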

Cited by 1,459 publications (898 citation statements)
References 120 publications
“…These types of weaknesses in DNNs could be exploited and pose a security concern for any technology using DNNs. Although defences against these attacks have been proposed [223], state-of-the-art attacks can by-pass defences and detection mechanisms.…”
Section: F. Safety (mentioning)
confidence: 99%
“…The investigation on ACKTR suggests that Kronecker-factored natural gradient approximations in RL is a promising framework. Although remarkable performance has been achieved by these algorithms on many challenging tasks (e.g., video games [1] and board games [37], [38]), recent studies have revealed that the policies trained by these algorithms are easily fooled by adversarial perturbations [3]- [5], as introduced next.…”
Section: A. Reinforcement Learning (mentioning)
confidence: 99%
“…The second criterion is imposed because of two major reasons. First, we are interested in investigating DNNs and their decision boundaries in the presence of realizable and non-random corner cases which in practice can have major safety and security consequences [5,30,38]. Second, essentially a DNN carves out decision regions (and decision boundaries) by learning on its training data not other random instances in the space R D .…”
Section: Proposed Framework (Deepdig) (mentioning)
confidence: 99%