2022
DOI: 10.48550/arxiv.2201.06070
Preprint

ALA: Adversarial Lightness Attack via Naturalness-aware Regularizations

Abstract: Most researchers have tried to enhance the robustness of deep neural networks (DNNs) by revealing and repairing their vulnerabilities with specialized adversarial examples. Some of these attack examples carry imperceptible perturbations restricted by an L_p norm. However, owing to their high-frequency property, such adversarial examples usually have poor transferability and can be defended against by denoising methods. To avoid these defects, some works leave the perturbations unrestricted to gain better robustness and trans…
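The L_p-norm restriction mentioned in the abstract can be illustrated with a minimal sketch of an L_inf-bounded, FGSM-style perturbation step. This is not the paper's method (ALA perturbs lightness in an unrestricted way); it only shows the restricted baseline the abstract contrasts against. The function name and the epsilon value of 8/255 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def linf_perturb(image, grad, epsilon=8 / 255):
    """One FGSM-style step: shift each pixel by epsilon in the
    sign of the loss gradient, keeping the total perturbation
    inside the L_inf ball of radius epsilon and the pixels in [0, 1]."""
    adv = image + epsilon * np.sign(grad)
    # Project back into the L_inf ball around the clean image.
    adv = np.clip(adv, image - epsilon, image + epsilon)
    # Keep the result a valid image.
    return np.clip(adv, 0.0, 1.0)

# Usage sketch with a random "image" and a random "gradient":
rng = np.random.default_rng(0)
img = rng.random((3, 4, 4))
g = rng.standard_normal((3, 4, 4))
adv = linf_perturb(img, g)
```

Because the perturbation flips pixels by a fixed small amount, it tends to be high-frequency, which is exactly the property the abstract says makes such examples easy to denoise and poor at transferring.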

Cited by 1 publication (1 citation statement)
References 5 publications
“…We hope the comprehensive analysis of adversarial robustness and corruption robustness on SAM can promote further study on the security of foundation models. In future work, we aim to evaluate the robustness of SAM against corruption-based adversarial attacks [33], [34], [35], [36].…”
Section: Discussion
confidence: 99%