2020
DOI: 10.1007/s11633-019-1211-x

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review

Abstract: Deep neural networks (DNNs) have achieved unprecedented success in numerous machine learning tasks across various domains. However, the existence of adversarial examples has raised concerns about applying deep learning to safety-critical applications. As a result, we have witnessed increasing interest in studying attack and defense mechanisms for DNN models on different data types, such as images, graphs and text. Thus, it is necessary to provide a systematic and comprehensive overview of the main threats of attac…

Cited by 519 publications (268 citation statements)
References 114 publications (212 reference statements)
“…In safety-critical applications like automatic drug injection in humans, guidance and navigation of autonomous vehicles, or oil well drilling, a black-box approach will be unacceptable. In fact, the vulnerability of DNNs has been exposed beyond doubt in several recent works [137], [138], [139]. These models can also be extremely biased depending on the data they were trained on.…”
Section: B Data-driven Modeling (mentioning)
confidence: 99%
“…Therefore, random borderline instances (i.e., those which are not similar to real instances) occupy some parts of the input space which are not practically appealing for further decision boundary characterization in Section 4. Note that, in spirit, the second criterion is similar to what is followed in adversarial example generation [36], where adversarial examples are required to be similar to real (benign) samples.…”
Section: Proposed Framework (Deepdig) (mentioning)
confidence: 99%
“…One way to obtain samples satisfying criterion (b) mentioned above is via targeted adversarial examples, which are slightly distorted versions of real instances that are misclassified by a DNN [36]. As will be discussed shortly, targeted adversarial example generation paves the way to meeting criterion (a) as well.…”
Section: Component (I): Initial Source To Target Adversarial Example (mentioning)
confidence: 99%
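The excerpt above describes targeted adversarial examples as slightly distorted real instances that a DNN misclassifies as an attacker-chosen class. A minimal sketch of one standard way to generate them, a single targeted FGSM step, is given below; it assumes a PyTorch image classifier, and the names model, x, target_class and eps are illustrative, not taken from the cited work.

```python
# Minimal sketch of targeted adversarial example generation via a single
# targeted FGSM step. Assumes a PyTorch classifier `model` that maps image
# tensors in [0, 1] to logits; all names here are illustrative.
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target_class, eps=0.03):
    """Return a slightly perturbed copy of x that nudges the model's
    prediction toward the attacker-chosen target_class."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target_class)
    loss.backward()
    # Step *against* the gradient to decrease the loss w.r.t. the target
    # label (the targeted variant), then keep pixels in a valid range.
    x_adv = x_adv - eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```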
“…A recent paper provides a comprehensive review of adversarial attacks and defenses [1] and offers a taxonomy for both the attacks and the defenses. Drawing on the past literature, this review paper defines adversarial examples as "inputs to machine learning models that an attacker intentionally designed to cause the model to make mistakes".…”
Section: Introduction (mentioning)
confidence: 99%
“…The taxonomy of adversarial defense in Xu et al. [1] consists of three categories: gradient masking, robust optimization, and adversarial detection. Gradient masking includes input data preprocessing (e.g., JPEG compression [2]), thermometer encoding [3], adversarial logit pairing [4], defensive distillation [5], randomization of the deep neural network models (e.g., randomly choosing a model from a set of models [6] or using dropout [7,8]), and the use of generative models (e.g., PixelDefend [9] and Defense-GAN [10]).…”
Section: Introduction (mentioning)
confidence: 99%
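One of the gradient-masking defenses quoted above is JPEG compression of the input [2]. A minimal sketch of that preprocessing step is shown below, assuming images arrive as float arrays in [0, 1] and using Pillow for the encode/decode round trip; the quality value of 75 is an assumption, not a setting from the cited papers.

```python
# Minimal sketch of a JPEG-compression preprocessing defense: round-trip the
# input image through lossy JPEG encoding to attenuate small adversarial
# perturbations before classification. Quality=75 is an assumed setting.
import io
import numpy as np
from PIL import Image

def jpeg_compress(image, quality=75):
    """image: H x W x 3 float array in [0, 1]; returns the same shape/range."""
    img = Image.fromarray((image * 255).astype(np.uint8))
    buffer = io.BytesIO()
    img.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    restored = np.asarray(Image.open(buffer), dtype=np.float32) / 255.0
    return restored
```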