2016
DOI: 10.48550/arxiv.1607.02533
Preprint

Adversarial examples in the physical world

Abstract: Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack …
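To make the abstract's definition concrete, the sketch below shows one common way such a slight modification is crafted: the single-step fast gradient sign method that the paper's iterative attack builds on. It is an illustrative sketch only, assuming a PyTorch-style model and loss; model, loss_fn, x, y, and the eps value are placeholders, not details taken from the paper.

# Illustrative sketch (assumed PyTorch API): craft an adversarial example with
# the fast gradient sign method by stepping along the sign of the input gradient.
import torch

def fgsm_example(model, loss_fn, x, y, eps=8 / 255):
    # Leaf copy of the input so its gradient can be computed.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    # Move each pixel by eps in the direction that increases the loss,
    # then keep the image in the valid [0, 1] range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()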

Cited by 789 publications (1,288 citation statements)
References 2 publications
“…Adversarial robustness: Traditional approaches [36,22,46,16] to adversarial learning consider worst-case perturbations to the data during training, i.e., the data is perturbed after it has been generated. While such a perturbation model is meaningful in the image classification setting for which adversarially robust training methods were originally developed, it does not immediately translate to the dynamic setting that we consider, where the adversary may be used to capture model uncertainty or process noise.…”
Section: Related Work (mentioning)
confidence: 99%
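For reference, the "worst-case perturbations to the data during training" that this statement refers to are typically realized as an adversarial training loop. The following is a minimal sketch under assumed names (model, loss_fn, optimizer, and loader are placeholders), using a single-step perturbation rather than any specific method from the cited works.

# Minimal sketch of adversarial training: each batch is perturbed against the
# current model before the usual gradient update. All names are placeholders.
import torch

def adversarial_training_epoch(model, loss_fn, optimizer, loader, eps=8 / 255):
    model.train()
    for x, y in loader:
        # Craft a single-step worst-case perturbation for this batch.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

        # Train on the perturbed data instead of the clean data.
        optimizer.zero_grad()
        loss_fn(model(x_adv), y).backward()
        optimizer.step()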
“…adversarial perturbations to the images it is possible to change the prediction of the classifier. Since then, plenty of research [4,25,28,29,39] has been performed on finding different types of adversarial perturbations and on studying robustification against them [4,10,15,15,25]. In this work, we utilize a strong yet undefended attack, the basic iterative method [25], for generating adversarial perturbations. [32] focused on robustification against adversarial as well as natural perturbations by using properly tuned Gaussian and Speckle noise.…”
Section: Related Work (mentioning)
confidence: 99%
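Since this statement relies on the basic iterative method introduced in the cited paper, a minimal sketch of that attack follows. It assumes a PyTorch-style model and loss (model, loss_fn, x, and y are placeholders); the step size, iteration count, and [0, 1] pixel range are illustrative choices, not values taken from the citing work.

# Minimal sketch of the basic iterative method: repeated small FGSM steps,
# each followed by clipping back into an eps-ball around the original input.
import torch

def basic_iterative_method(model, loss_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project back into the eps-ball around x and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv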
“…Such kinds of attacks can also exist for the audio domain [9] and text classification [10] in natural language processing. Those attacks exist not only in the digital world but can also happen in the physical world, such as the cellphone camera attack [11] or the road sign attack [12]. Besides inference-time attacks, backdoor attacks [13], [14], which weaken model accuracy during training, have also been investigated.…”
Section: Introduction (mentioning)
confidence: 99%