2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01455

Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection

Cited by 47 publications (13 citation statements) | References 22 publications
“…Wang et al 31 suggested an effective adversarial attack strategy for several kinds of object identification models. Liu et al 32 generated an adversarial patch that targets object detection networks and simultaneously attacks the bounding box regression and object classification. In the object segmentation domain, Arnab et al 14 conducted a research study to examine the robustness of different object segmentation models in the face of adversarial examples.…”
Section: Adversarial Attacks
confidence: 99%
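The joint attack these statements describe, optimizing a patch so that it degrades both the detector's box regression and its object classification, can be made concrete with a short sketch. The following is a minimal illustration, not the cited authors' code: it assumes a torchvision Faster R-CNN as the victim detector, and the image, patch size, placement, and hyperparameters are made-up stand-ins. It simply ascends the detector's classification and box-regression losses with respect to the pasted patch.

```python
# Minimal sketch of a joint classification + box-regression patch attack,
# in the spirit of DPATCH. Detector choice and all values are illustrative.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.train()  # training mode so the model returns its loss dict
for p in detector.parameters():
    p.requires_grad_(False)  # freeze the detector; only the patch is optimized

patch = torch.rand(3, 80, 80, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

image = torch.rand(3, 416, 416)  # stand-in for a real input image
target = [{"boxes": torch.tensor([[120.0, 120.0, 260.0, 260.0]]),
           "labels": torch.tensor([1])}]  # stand-in ground truth

for _ in range(100):
    attacked = image.clone()
    attacked[:, :80, :80] = patch.clamp(0, 1)  # paste patch at top-left
    losses = detector([attacked], target)
    # Ascend both losses so the detector misclassifies and misplaces boxes.
    loss = -(losses["loss_classifier"] + losses["loss_box_reg"])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Summing the two loss terms is what makes the attack "simultaneous": gradients from both the classification head and the regression head flow into the same patch pixels.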
“…suggested an effective adversarial attack strategy for several kinds of object identification models. Liu et al 32 generated an adversarial patch that targets object detection networks and simultaneously attacks the bounding box regression and object classification.…”
Section: Related Work
confidence: 99%
“…Brown et al (2017) suggested that a neural net can be fooled by completely replacing a part of an image with their designed patch. Liu et al (2019) presented a black-box adversarial patch termed D-PATCH that can simultaneously attack the bounding box regression and object classification. Moreover, Athalye et al (2017) presented expectation over transformation (EOT), a general-purpose algorithm for creating robust adversarial examples that can successfully fabricate three-dimensional adversarial objects.…”
Section: Adversarial Patch
confidence: 99%
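The EOT idea mentioned above has a compact statement: rather than minimizing the attack loss on a single input, one minimizes its expectation over a distribution of transformations, estimated by Monte Carlo sampling, so the perturbation stays adversarial under those transformations. Below is a minimal sketch under stated assumptions: a torchvision ResNet-18 classifier, two illustrative transformations, and a hypothetical target class; none of these specifics come from the cited works.

```python
# Minimal EOT sketch (after Athalye et al., 2017): average the targeted
# attack loss over randomly sampled transformations. All values illustrative.
import torch
import torchvision
import torchvision.transforms as T

model = torchvision.models.resnet18(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)  # attack the input, not the weights

image = torch.rand(1, 3, 224, 224)  # stand-in input
target = torch.tensor([207])        # hypothetical target class
delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

transforms = [T.RandomRotation(15), T.RandomAffine(0, translate=(0.1, 0.1))]

for _ in range(50):
    loss = 0.0
    for _ in range(8):  # Monte Carlo estimate of E_t[loss(t(x + delta))]
        t = transforms[torch.randint(len(transforms), (1,)).item()]
        out = model(t((image + delta).clamp(0, 1)))
        loss = loss + torch.nn.functional.cross_entropy(out, target)
    (loss / 8).backward()
    opt.step()
    opt.zero_grad()
    delta.data.clamp_(-0.1, 0.1)  # keep the perturbation bounded
```

Because the transformations are differentiable with respect to the input, gradients pass through each sampled transformation back to the shared perturbation.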
“…presented a patch attack to fool the automated driving systems by modifying the road signs under various conditions. Furthermore, many researchers have evaluated this attack in various applications, such as object detection and FR in digital 36 and physical environments 37,38 …”
Section: Literature Survey
confidence: 99%