2019
DOI: 10.1145/3317611

A General Framework for Adversarial Examples with Objectives

Abstract: Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains. Most research on adversarial examples takes as its only constraint that the perturbed images are similar to the originals. However, real-world application of these ideas often requires the examples to satisfy additional objectives, which are typically enforced through custom modifications of the perturbation process. …
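Concretely, attacks of this kind are usually implemented as projected gradient descent on a joint loss: the standard misclassification term plus a weighted term for the extra objective. The sketch below is illustrative only, not the paper's implementation; the model, objective_fn, and all hyperparameters are assumed names.

    import torch
    import torch.nn.functional as F

    def perturb_with_objective(model, x, target, objective_fn, lam=1.0,
                               eps=8 / 255, alpha=2 / 255, steps=40):
        # Find a small perturbation delta so that model(x + delta) predicts
        # `target` while also minimizing an extra objective, weighted by `lam`.
        # The epsilon-ball projection keeps x + delta close to the original,
        # which is the usual similarity constraint.
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            adv = x + delta
            loss = F.cross_entropy(model(adv), target) + lam * objective_fn(adv)
            loss.backward()
            with torch.no_grad():
                delta -= alpha * delta.grad.sign()   # descend on the joint loss
                delta.clamp_(-eps, eps)              # project into the eps-ball
            delta.grad.zero_()
        return (x + delta).detach()

Swapping objective_fn (e.g., a printability or smoothness penalty) is what lets a single attack loop of this shape cover different real-world objectives without custom modification.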

Cited by 159 publications (117 citation statements)
References 43 publications
“…Sharif et al. [76] developed the Eyeglass Accessory Printing method to generate a physically realizable yet inconspicuous class of attacks. In [91], the authors proposed Adversarial Generative Nets (AGNs) to generate images of artifacts (e.g., eyeglasses) that would lead to misclassification. The artifacts generated by such neural networks resembled a reference set of artifacts (e.g., real eyeglass designs) and satisfied the inconspicuousness objective.…”
Section: Physical Attacks-oriented
confidence: 99%
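As a rough illustration of the AGN idea described in this excerpt (a sketch under assumed interfaces, not Sharif et al.'s code): a generator G is trained against both a discriminator D, fit to real eyeglass designs so the generated artifacts stay inconspicuous, and a face classifier, so the overlaid artifact causes the desired misclassification. G, D, face_model, mask, and kappa are all illustrative names.

    import torch
    import torch.nn.functional as F

    def agn_generator_loss(G, D, face_model, z, face, mask, target, kappa=1.0):
        glasses = G(z)                                  # candidate eyeglass image
        d_out = D(glasses)                              # D assumed to output probabilities
        # Realism term: fool the discriminator trained on real eyeglass designs.
        realism = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
        # Attack term: overlay the artifact on the face and push the classifier
        # toward the target identity (impersonation).
        attacked = face * (1 - mask) + glasses * mask
        attack = F.cross_entropy(face_model(attacked), target)
        return realism + kappa * attack                 # inconspicuous and misclassifying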
“…In [91], Sharif et al. assessed dodging and impersonation attacks against the VGG-Face and OpenFace models. In the evaluation stage, they reported the accuracies of the DNNs and the success rates (SRs) of the attacks.…”
Section: Comparison of Different Adversaries on Evaluation Process
confidence: 99%
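For reference, the dodging and impersonation success rates (SRs) mentioned in this excerpt are typically computed as below; this is an illustrative sketch, and the tensor names are assumptions.

    import torch

    @torch.no_grad()
    def attack_success_rates(model, adv, true_labels, target_labels):
        # Dodging succeeds when the prediction differs from the wearer's true
        # identity; impersonation succeeds when it matches the chosen target.
        pred = model(adv).argmax(dim=1)
        dodging_sr = (pred != true_labels).float().mean().item()
        impersonation_sr = (pred == target_labels).float().mean().item()
        return dodging_sr, impersonation_sr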