2022
DOI: 10.1109/tip.2022.3217375

Defending Person Detection Against Adversarial Patch Attack by Using Universal Defensive Frame

Cited by 7 publications (2 citation statements)
References 48 publications
“…This means the perturbation generated in the background area usually does not improve the attack effect but may add the perturbed pixels and weaken the invisibility. For patch-based attacks, humans can easily perceive attacks due to the large contrast between patch pixels and image pixels, which is not conducive to the camouflage of targets [34]. These patches usually have a large average gradient, so some defense measures can even detect the location of the target by detecting patch-based attacks [35].…”
Section: Introduction
confidence: 99%
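The statement above notes that adversarial patches tend to have a large average gradient, which some defenses exploit to localize the patch [35]. A minimal sketch of that idea, assuming a simple finite-difference gradient and a mean-plus-k-sigma threshold over fixed windows (this is an illustration, not the cited defense's actual algorithm):

```python
import numpy as np

def local_gradient_energy(img: np.ndarray, win: int = 8) -> np.ndarray:
    """Mean gradient magnitude over non-overlapping win x win windows."""
    gy, gx = np.gradient(img.astype(float))   # finite-difference gradients
    mag = np.hypot(gx, gy)                    # per-pixel gradient magnitude
    h, w = mag.shape
    h2, w2 = h // win, w // win
    # Average the magnitude inside each win x win block.
    blocks = mag[:h2 * win, :w2 * win].reshape(h2, win, w2, win)
    return blocks.mean(axis=(1, 3))

def detect_patch(img: np.ndarray, win: int = 8, k: float = 2.0):
    """Return top-left (row, col) of windows whose gradient energy
    exceeds mean + k * std — candidate adversarial-patch regions."""
    energy = local_gradient_energy(img, win)
    thresh = energy.mean() + k * energy.std()
    ys, xs = np.where(energy > thresh)
    return [(y * win, x * win) for y, x in zip(ys, xs)]
```

On a smooth image with a high-frequency patch pasted in, the flagged windows cluster over the patch; real defenses refine this idea with learned saliency or robust statistics rather than a fixed threshold.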
“…Along with the importance and remarkable achievement in multispectral object detection, DNN-based object detectors are shown to be vulnerable to adversarial patch attacks (Liu et al 2018;Chen et al 2018;Thys, Van Ranst, and Goedemé 2019;Lee and Kolter 2019;Wang et al 2021;Xu et al 2020;Wu et al 2020b;Kim, Yu, and Ro 2022;Yu et al 2022). Adversarial patches are localized perturbations intentionally crafted by malicious attackers that fool machine learning models and lead to misprediction.…”
Section: Introduction
confidence: 99%