2022
DOI: 10.36227/techrxiv.20085902.v1
Preprint
Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey

Abstract: Adversarial attacks on deep learning models, especially in safety-critical systems, have drawn growing attention in recent years owing to a lack of trust in the security and robustness of AI models. Yet more primitive adversarial attacks may be physically infeasible or may require resources that are hard to access, such as the training data, which motivated the emergence of patch attacks. In this survey, we provide a comprehensive overview covering existing techniques of adversarial patch …

Cited by 11 publications (13 citation statements)
References 2 publications
“…To address the above challenges and generate 3D adversarial examples in driving scenarios, we build Adv3D upon recent advances in NeRF [38] that provide both differentiable rendering and realistic synthesis. In order to generate physically realizable attacks, we model Adv3D in a patch-attack [44] manner and use an optimization-based approach that starts with a realistic NeRF object [29] to learn its 3D adversarial texture. We optimize the adversarial texture to minimize the predicted confidence of all objects in the scenes, while keeping shape unchanged.…”
Section: Transferability
confidence: 99%
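The excerpt above describes an optimization-based patch attack: a patch region of the input is optimized by gradient descent to minimize the model's predicted confidence. A minimal, hypothetical sketch of that idea is below, using a toy linear "detector" in NumPy; the model, patch size, and pixel bounds are stand-ins, not the Adv3D setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "detector": predicted confidence = sigmoid(w . x)
w = rng.normal(size=64)           # fixed model weights (8x8 "image", flattened)
x = np.abs(rng.normal(size=64))   # benign input with non-negative pixels

patch_idx = np.arange(16)         # patch occupies the first 16 pixels
patch = x[patch_idx].copy()       # initialize the patch from the image

def confidence(patch_pixels):
    x_adv = x.copy()
    x_adv[patch_idx] = patch_pixels
    return sigmoid(w @ x_adv)

# Gradient descent on the patch pixels only:
# d(conf)/d(patch) = conf * (1 - conf) * w[patch_idx]
lr = 0.5
for _ in range(200):
    c = confidence(patch)
    grad = c * (1.0 - c) * w[patch_idx]
    patch -= lr * grad                     # step to reduce predicted confidence
    patch = np.clip(patch, 0.0, 3.0)       # keep patch pixels in a valid range

print(f"confidence before/after: {confidence(x[patch_idx]):.4f} / {confidence(patch):.4f}")
```

Each step only touches the patch pixels, mirroring the physical constraint that the rest of the scene cannot be perturbed; the clipping plays the role of keeping the patch printable.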
“…The numerous proposals of varied corruption in recent years necessitate robust model training. Eventually, the concern over the inconsistent behavior gave rise to several defense methodologies [19], [20], [21]. A defense can be broadly classified as either model agnostic (e.g., using saliency map) [22], [23] or model dependent (adversarial training) [9] where the network weights are learned to tackle the corruption.…”
Section: Natural
confidence: 99%
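The excerpt above contrasts model-agnostic defenses with model-dependent ones such as adversarial training, where weights are learned on perturbed inputs. A hypothetical NumPy sketch of that training loop follows, using a toy logistic-regression "network" and an FGSM-style inner attack; the data, step sizes, and epsilon are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))            # toy dataset
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)        # linearly separable labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(10)
eps, lr = 0.1, 0.5
for _ in range(300):
    # Inner step: craft adversarial inputs via the sign of the input gradient
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]    # d(BCE loss)/d(x) per example
    X_adv = X + eps * np.sign(grad_x)
    # Outer step: update weights on the adversarial batch
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

acc = ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean()
print(f"clean accuracy after adversarial training: {acc:.3f}")
```

The key design point is that the attack runs inside the training loop, so the weights are fitted to the worst-case inputs within the epsilon budget rather than to the clean data alone.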
“…[20], [22]–[25] reviewed the development of adversarial attack and defense methods against DNNs at different periods. Some surveys only focus on the specific task, such as image classification [26], adversarial patch [27]. However, these surveys mainly provide an overall view of the adversarial attack and defense in the specific research area or a roughly overall perspective.…”
Section: Introduction
confidence: 99%