2021 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp40001.2021.00091
Poltergeist: Acoustic Adversarial Machine Learning against Cameras and Computer Vision

Abstract: With news and information being as easy to access as they currently are, it is more important than ever to ensure that people are not misled by what they read. Recently, the rise of neural fake news (AI-generated fake news) and its demonstrated effectiveness at fooling humans has prompted the development of models to detect it. One such model is the Grover model, which can both detect neural fake news to prevent it, and generate it to demonstrate how a model could be misused to fool human readers. In this wor…

Cited by 31 publications (29 citation statements)
References 40 publications
“…Prior work [65] used this attack vector to attack the camera stabilization system, which has built-in IMUs, to manipulate the camera's object detection results. • Translucent patches refer to sticking translucent films with color spots on the camera lens.…”
Section: Sensor Attack Vectors
confidence: 99%
“…Prior gray-box attacks all assume a lack of knowledge of AI model internals such as weights. However, some works still require confidence scores in the model outputs [56, 63-65, 71, 79, 83, 84], and some require detailed sensor parameters [65, 67, 89].…”
Section: Sensor Attack Vectors
confidence: 99%
“…), and then causes the launch of AP4. For example, an adversary can destabilize a vehicle and blur its perceived images, causing incorrect object detection results [72]. • AP7 …”
Section: B. Attack Paths
confidence: 99%
“…Researchers investigated the threats and defense methods of electromagnetic attacks on sensing and control systems [5], [37], [38], [39], [40]. Recent works explored signal injection attacks on cameras [41], [42], [43], [44], [45] to spoof computer vision systems. Researchers also studied attacks on LiDAR systems to deceive the perception in autonomous vehicles [46], [47], [48].…”
Section: Related Work
confidence: 99%