2022
DOI: 10.48550/arxiv.2205.13618
Preprint
Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation

Abstract: Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years. The proposed attacks aimed solely at compromising the models' integrity (i.e., trustworthiness of the model's prediction), while adversarial attacks targeting the models' availability, a critical aspect in safety-critical domains such as autonomous driving, have not been explored by the machine learning research community. In this paper, we propose NMS-Sponge, a novel approach that negatively a…
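The abstract's key ingredient is a *universal* adversarial perturbation: a single fixed perturbation reused across all inputs, rather than computed per image. The sketch below illustrates only that property under an L-infinity budget; it does not reproduce NMS-Sponge's actual optimization objective (inflating the non-maximum-suppression workload), and the function name and epsilon value are illustrative assumptions.

```python
import numpy as np

def apply_universal_perturbation(images, delta, epsilon=8 / 255):
    """Add one fixed perturbation to every image in a batch.

    The perturbation is clipped to an L-infinity ball of radius epsilon,
    and the perturbed images are clipped back to the valid [0, 1] range.
    """
    delta = np.clip(delta, -epsilon, epsilon)
    return np.clip(images + delta, 0.0, 1.0)

# One perturbation shared by all inputs -- the "universal" property.
rng = np.random.default_rng(0)
batch = rng.random((4, 32, 32, 3))                      # 4 images in [0, 1)
delta = rng.uniform(-8 / 255, 8 / 255, size=(32, 32, 3))
adv = apply_universal_perturbation(batch, delta)
print(adv.shape)  # (4, 32, 32, 3)
```

In the actual attack, `delta` would be optimized over a training set so that the perturbed inputs degrade the detector's availability (e.g., its inference latency), rather than drawn at random as here.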

Cited by 2 publications (2 citation statements)
References 29 publications
“…End to End time This one path direction calculated as time from source to destination. [99] Packet drop…”
Section: Objective Functions Description References
Mentioning confidence: 99%
“…Weng et al [47] leverage the Kullback-Leibler (KL) divergence loss to implement both targeted and non-targeted universal attacks. And universal adversarial attacks can be adopted to different tasks such as remote sensing [48], text recognition [49], watermarking [50], object detection [51] and so on.…”
Section: Image-dependent Attacks
Mentioning confidence: 99%