2017
DOI: 10.48550/arxiv.1702.06832
Preprint

Adversarial examples for generative models

Cited by 22 publications (19 citation statements)
References 6 publications
“…these successes, they are increasingly being used as part of control pipelines in physical systems such as cars [8,17], UAVs [4,24], and robots [40]. Recent work, however, has demonstrated that DNNs are vulnerable to adversarial perturbations [5,9,10,15,16,22,25,29,30,35]. These carefully crafted modifications to the (visual) input of DNNs can cause the systems they control to misbehave in unexpected and potentially dangerous ways.…”
Section: Introduction (mentioning)
confidence: 99%
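The "carefully crafted modifications" mentioned in that excerpt are typically computed from the model's own gradients. A minimal sketch of one common crafting method, a single fast gradient-sign step against a hypothetical toy classifier (the toy model, input shape, and epsilon are assumptions; the indexed preprint itself targets generative models, which this sketch does not cover):

# Minimal fast-gradient-sign sketch (illustrative only; not the attack from the indexed paper).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()  # toy stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, y, epsilon=0.1):
    # One gradient-sign step that increases the classification loss on (x, y).
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

x = torch.rand(1, 1, 28, 28)   # random stand-in "image"
y = torch.tensor([3])          # arbitrary true label
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # prediction may flip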
“…Different attacks against machine learning models have been proposed in recent years. For example, reverse engineering attacks [17,36] steal model parameters and structures; adversarial learning [13,22,27,35] generates misleading examples that will be misclassified by the model; model inversion attacks [10,14] infer the features of a record based on the model's predictions on it; membership inference attacks [32] infer the presence of a record in the model's training dataset.…”
Section: Related Work (mentioning)
confidence: 99%
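Of the attack families listed in that excerpt, membership inference is perhaps the simplest to sketch. A minimal confidence-threshold version, with an assumed toy model, random data, and threshold (not the procedure of any specific cited work):

# Confidence-threshold membership-inference sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def membership_score(model, x, y):
    # Model confidence on the true label; unusually high confidence can hint
    # that (x, y) was part of the training set.
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    return probs[torch.arange(len(y)), y]

def infer_membership(model, x, y, threshold=0.9):
    # Predict "member" when confidence on the true label exceeds the (assumed) threshold.
    return membership_score(model, x, y) > threshold

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()  # toy stand-in classifier
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
print(infer_membership(model, x, y))  # boolean membership guess per example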
“…Deep neural networks (DNNs) are widely applied in computer vision, natural language, and robotics, especially in safety-critical tasks such as autonomous driving [10]. At the same time, DNNs have been shown to be vulnerable to adversarial examples [3,7,8,15,18], maliciously perturbed inputs that cause DNNs to produce incorrect predictions. These attacks pose a risk to the use of deep learning in safety- and security-critical decisions.…”
Section: Introduction (mentioning)
confidence: 99%