2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01470
Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network

Abstract: We question the current evaluation practice on diffusion-based purification methods. Diffusion-based purification methods aim to remove adversarial effects from an input data point at test time. The approach gains increasing attention as an alternative to adversarial training due to the disentangling between training and testing. Well-known white-box attacks are often employed to measure the robustness of the purification. However, it is unknown whether these attacks are the most effective for the diffusion-based purification…
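The abstract describes the general purify-then-classify idea behind diffusion-based purification: add forward-diffusion noise to (hopefully) wash out the adversarial perturbation, denoise, and only then classify. Below is a minimal, hedged sketch of that pipeline; `TinyDenoiser`, `TinyClassifier`, `purify_then_classify`, and the timestep `t_star` are illustrative placeholders assumed here, not models or APIs from the paper.

```python
# Hedged sketch of a purify-then-classify pipeline for diffusion-based purification.
# The denoiser and classifier are toy stand-ins, not the paper's models.
import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    """Placeholder stand-in for a pretrained diffusion denoiser."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x_noisy: torch.Tensor, t: float) -> torch.Tensor:
        # A real denoiser would condition on the timestep t; here we only
        # apply a learnable smoothing as an illustration.
        return self.net(x_noisy)


class TinyClassifier(nn.Module):
    """Placeholder stand-in for the downstream classifier."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def purify_then_classify(x: torch.Tensor,
                         denoiser: nn.Module,
                         classifier: nn.Module,
                         t_star: float = 0.1) -> torch.Tensor:
    """Inject forward-diffusion noise up to t_star, denoise, then classify.

    The intent is that the added noise drowns out the adversarial
    perturbation and the denoiser recovers a clean-looking input.
    """
    noise = torch.randn_like(x)
    x_noisy = (1.0 - t_star) ** 0.5 * x + t_star ** 0.5 * noise  # forward diffusion step
    x_purified = denoiser(x_noisy, t_star)                       # reverse (denoising) step
    return classifier(x_purified)                                # predict on the purified input


if __name__ == "__main__":
    x_adv = torch.rand(1, 3, 32, 32)  # stand-in for an adversarial example
    logits = purify_then_classify(x_adv, TinyDenoiser(), TinyClassifier())
    print(logits.shape)  # torch.Size([1, 10])
```

Because the purification is performed only at test time, the classifier itself needs no adversarial training; white-box attacks against this pipeline must differentiate through (or approximate) the stochastic purification step, which is exactly the evaluation practice the abstract calls into question.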

Cited by 7 publications (1 citation statement)
References 23 publications
“…CoLLaVO to further improve once it incorporates a plethora of visual prompts obtained from diverse sources like robust object classification or image captioning models (Lee et al., 2022; Kim et al., 2023c), object-centric causally human-interpretable information (Kim et al., 2023b), open object detection, visual grounding (Liu et al., 2023d; Ren et al., 2024), interactive or unsupervised segmentation (Kirillov et al., 2023; Kim et al., 2023a), and optical character recognition models (Bautista and Atienza, 2022).…”
Section: It Is Expected For
mentioning (confidence: 99%)