Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3319535.3354259

Seeing isn't Believing

Abstract: Recently, Adversarial Examples (AEs) that deceive deep learning models have been a topic of intense research interest. Compared with AEs in the digital space, physical adversarial attacks are considered a more severe threat to applications such as face recognition in authentication, object detection in autonomous driving cars, etc. In particular, deceiving object detectors in practice is more challenging, since the relative position between the object and the detector may keep changing. Existing…
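A common way to cope with the changing relative position mentioned in the abstract is to optimize the perturbation over random placements of the object in the detector's view (Expectation over Transformation). The sketch below is a minimal, hedged illustration of that idea, not the paper's own pipeline; the `detector_loss` callable, the canvas size, and the scale range are assumptions.

```python
# Minimal Expectation-over-Transformation sketch (an illustration, not the
# paper's method): the perturbation is optimized so that it still fools the
# detector under random rescalings and placements, mimicking a changing
# relative position between object and detector.
import torch
import torch.nn.functional as F

def random_placement(patch, canvas_size=416):
    """Randomly rescale the patch and pad it to a random position on a blank canvas."""
    scale = torch.empty(1).uniform_(0.3, 1.0).item()
    h = max(1, int(patch.shape[-2] * scale))
    w = max(1, int(patch.shape[-1] * scale))
    scaled = F.interpolate(patch.unsqueeze(0), size=(h, w),
                           mode="bilinear", align_corners=False).squeeze(0)
    top = int(torch.randint(0, canvas_size - h + 1, (1,)))
    left = int(torch.randint(0, canvas_size - w + 1, (1,)))
    # Zero-padding places the (differentiable) patch at position (top, left).
    return F.pad(scaled, (left, canvas_size - w - left, top, canvas_size - h - top))

def eot_step(patch, detector_loss, optimizer, num_views=8):
    """One optimization step averaged over several random viewpoints.
    `detector_loss` is an assumed callable: image tensor -> scalar attack loss."""
    optimizer.zero_grad()
    loss = torch.stack([detector_loss(random_placement(patch))
                        for _ in range(num_views)]).mean()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0.0, 1.0)      # keep pixel values printable/valid
    return float(loss)

# Usage sketch: patch = torch.rand(3, 100, 100, requires_grad=True)
#               opt = torch.optim.Adam([patch], lr=0.01)
#               for _ in range(steps): eot_step(patch, detector_loss, opt)
```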

Cited by 122 publications (63 citation statements) | References 26 publications
“…They model this as a closed-loop system, and design an algorithm that searches the input space of the classifier to find sets of inputs that are misclassified. In [113], Zhao et al devise a new methodology for generating adversarial examples to trick machine learning based object detectors. They perform two types of attacks: a hiding attack where the object is not recognized, and an appearing attack where the object is classified incorrectly.…”
Section: Signal Injection Attacks: Known Attacks (mentioning)
confidence: 99%
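For concreteness, the two attack goals quoted above can be written as simple objectives over a detector's per-box class scores. This is a generic, hedged sketch; the `(num_boxes, num_classes)` score tensor and the `detector`, `image`, and `delta` names are assumptions, not the authors' actual interface.

```python
# Generic sketch of the two objectives (assumed detector output: a tensor of
# per-candidate-box class confidences with shape (num_boxes, num_classes)).
import torch

def hiding_loss(scores: torch.Tensor, victim_class: int) -> torch.Tensor:
    # Hiding attack: drive down the strongest confidence for the victim class,
    # so the detector no longer reports the object.
    return scores[:, victim_class].max()

def appearing_loss(scores: torch.Tensor, target_class: int) -> torch.Tensor:
    # Appearing attack: drive up the strongest confidence for an attacker-chosen
    # class, so the detector reports an object that is wrong or not there.
    return -scores[:, target_class].max()

# Either loss is minimized w.r.t. the adversarial perturbation, e.g.
#   loss = hiding_loss(detector(image + delta), victim_class); loss.backward()
# where `detector`, `image`, and `delta` are assumed to be defined elsewhere.
```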
“…However, given the ease of deployment, these defenses are favored for real-world implementations. [92], [110], [111], [114], [113]…”
Section: Application-domain Defense (mentioning)
confidence: 99%
“…Adversarial attacks on object detectors have also received extensive attention [6][7][8][9][10][11]. Adversarial patch attacks on object detectors (unlike traditional camouflage that evades detection of object detectors by placing camouflage nets to cover important objects) are being explored to achieve concealment of important objects, such as aircraft (simple production method and low production cost), by deceiving object detectors and guiding them to make incorrect decisions [12][13][14].…”
Section: Introduction (mentioning)
confidence: 99%
“…Currently, research on adversarial patch attacks in terms of object detectors is mainly conducted on natural images [12][13][14][15]. These attack methods usually generate a fixed-size adversarial patch to attack an object detector.…”
Section: Introduction (mentioning)
confidence: 99%
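As a rough illustration of the "fixed-size adversarial patch" setup described in the statement above (a hedged sketch, not any specific paper's code; the patch size, paste location, `detection_loss` callable, and `dataloader` are assumptions):

```python
# One learned, fixed-size patch pasted onto training images and optimized to
# degrade an (assumed) object-detection loss.
import torch

patch = torch.rand(3, 80, 80, requires_grad=True)      # assumed 80x80 patch
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch, top=20, left=20):
    """Paste the same patch at a fixed location into every image of the batch."""
    ph, pw = patch.shape[-2:]
    patched = images.clone()
    patched[:, :, top:top + ph, left:left + pw] = patch   # gradients flow into `patch`
    return patched

# Training-style loop (both `dataloader` and `detection_loss` are assumptions):
# for images in dataloader:
#     loss = detection_loss(apply_patch(images, patch))
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
#     patch.data.clamp_(0.0, 1.0)
```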
“…Attackers can quickly generate adversarial examples by using the gradient of the loss function of a deep network, or, viewing the attack as an optimization problem, construct powerful adversarial examples under different distortion metrics. Recent studies [6–8] show that adversarial attacks can be implemented in physical scenarios, which gives rise to severe safety issues. Hence, defending against adversarial attacks has emerged as a critical factor in artificial intelligence security.…”
Section: Introduction (mentioning)
confidence: 99%
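The "fast" gradient-based generation that the last statement refers to is commonly realized as the Fast Gradient Sign Method. Below is a minimal, hedged PyTorch sketch, where `model`, `image`, and `label` are assumed inputs and `eps` an assumed L-infinity budget.

```python
# Minimal FGSM-style sketch: one gradient step along the sign of the loss
# gradient, clipped back to the valid pixel range.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=8 / 255):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```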