2021
DOI: 10.48550/arxiv.2102.02956
Preprint
DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks

Abstract: State-of-the-art object detectors are vulnerable to localized patch hiding attacks where an adversary introduces a small adversarial patch to make detectors miss the detection of salient objects. In this paper, we propose the first general framework for building provably robust detectors against the localized patch hiding attack called DetectorGuard. To start with, we propose a general approach for transferring the robustness from image classifiers to object detectors, which builds a bridge between robust imag…

Cited by 2 publications (11 citation statements)
References 46 publications
“…The purpose of [21] is different from our work. The purpose of [21] is to discriminate whether the image has been attacked or not.…”
Section: Needs Ethics Review: No
Mentioning; confidence: 80%
“…We would like to thank the reviewer for the valuable comment. As per the reviewer's comment, it would be better to compare with more recent works [18,21]. However, direct comparison with the proposed method is difficult for the following reasons.…”
Section: Needs Ethics Review: No
Mentioning; confidence: 98%
“…In the domain of object detection, most existing defenses focus on global perturbations with an ℓp-norm constraint [8,10,51], and only a few defenses [20,39,48] for patch attacks have been proposed. Saha [39] proposed Grad-defense and OOC defense for defending against blindness attacks, in which the detector is blind to a specific object category chosen by the adversary.…”
Section: Defenses Against Patch Attacks
Mentioning; confidence: 99%
“…Saha [39] proposed Grad-defense and OOC defense for defending against blindness attacks, in which the detector is blind to a specific object category chosen by the adversary. DetectorGuard [48] is a provable defense against localized patch hiding attacks. Ji et al. [20] proposed Ad-YOLO to defend against human-detection patch attacks by adding a patch class to the YOLOv2 [36] detector so that it detects both the objects of interest and adversarial patches.…”
Section: Defenses Against Patch Attacks
Mentioning; confidence: 99%