2021
DOI: 10.48550/arxiv.2108.09135
Preprint

PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier

Abstract: The adversarial patch attack against image classification models aims to inject adversarially crafted pixels within a localized restricted image region (i.e., a patch) for inducing model misclassification. This attack can be realized in the physical world by printing and attaching the patch to the victim object and thus imposes a real-world threat to computer vision systems. To counter this threat, we propose PatchCleanser as a certifiably robust defense against adversarial patches that is compatible with any …

Cited by 2 publications (8 citation statements)
References 38 publications

“…For a patch of any shape, size, and location, vertical/horizontal lines that do not intersect with the patch can generate masks that remove the entire patch. This patch-agnostic property is a huge improvement from all existing masking-based certifiably robust defenses (for image classification) against adversarial patches [38,66,67,69], whose robustness is completely undermined without a good prior estimation of the patch information.…”
Section: Patch-Agnostic Masking
confidence: 99%
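
The line-based masking idea quoted above can be made concrete with a short sketch. The following is a minimal, hypothetical Python/NumPy illustration, not the cited defense's actual code; the function name `line_masks` and the stride parameter are assumptions. Each candidate horizontal or vertical line yields two masks, one blanking each side of the line, so any patch that some line does not intersect is fully removed by the mask covering the patch's side.

```python
import numpy as np

def line_masks(h, w, stride=32):
    """Generate masks from horizontal/vertical cut lines (hypothetical sketch).

    For each candidate line, blank out one side of the image. Any patch
    that a line does not intersect lies entirely on one side, so the mask
    blanking that side removes the whole patch.
    """
    masks = []
    for y in range(stride, h, stride):  # horizontal cut lines
        top = np.ones((h, w), dtype=bool)
        top[:y, :] = False              # blank everything above the line
        bot = np.ones((h, w), dtype=bool)
        bot[y:, :] = False              # blank everything below the line
        masks += [top, bot]
    for x in range(stride, w, stride):  # vertical cut lines
        left = np.ones((h, w), dtype=bool)
        left[:, :x] = False             # blank everything left of the line
        right = np.ones((h, w), dtype=bool)
        right[:, x:] = False            # blank everything right of the line
        masks += [left, right]
    return masks  # True = pixel kept, False = pixel masked out
```

Applying a mask is then just `image * mask[..., None]` for an HWC array; the patch-agnostic property holds because the construction never needs the patch's shape, size, or location.
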
“…One challenge of the pixel-masking defense is ensuring that some masks can remove the entire patch without knowing how attackers generate the patch. A naive solution to this challenge, which is adopted by a few certifiably robust image classification techniques [38,67], is to move a mask across all possible image locations and evaluate model predictions on every masked image. If the mask is large enough to cover the entire patch, at least one masked image (regardless of the patch location) has no adversarial pixels and enables safe model predictions.…”
Section: Patch-Agnostic Masking
confidence: 99%
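
The naive sliding-mask procedure described in this statement is straightforward to sketch. Below is a minimal, hypothetical NumPy illustration (the `model` callable and the `masked_predictions` helper are assumptions, not APIs from the cited works): a square mask slides over the image on a stride grid (a stride of 1 would cover all possible locations, as the quote says) and the classifier is evaluated on every masked copy.

```python
import numpy as np

def masked_predictions(model, image, mask_size=64, stride=16):
    """Evaluate `model` on every masked copy of `image` (hypothetical sketch).

    Slides a mask_size x mask_size occluding mask over the image with the
    given stride. If mask_size is at least as large as the patch, at least
    one masked image contains no adversarial pixels, regardless of where
    the patch is placed.
    """
    h, w = image.shape[:2]
    preds = []
    for y in range(0, h - mask_size + 1, stride):
        for x in range(0, w - mask_size + 1, stride):
            masked = image.copy()
            masked[y:y + mask_size, x:x + mask_size] = 0  # blank the region
            preds.append(model(masked))  # model: masked image -> class label
    return preds  # a defense then aggregates, e.g. certifies if all agree
```

The certification logic built on top of such predictions (e.g., checking whether every masked image yields the same label) is where defenses like PatchCleanser differ; this sketch only shows the enumeration step the quote describes.
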