2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00568

LAFEAT: Piercing Through Adversarial Defenses with Latent Features

Cited by 36 publications (20 citation statements); references 20 publications.

Citation statements (ordered by relevance):
“…We used the new surrogate loss function introduced in our earlier work [51], which improves both the attack success rate and the convergence rate compared to the original loss function used to train the model. For this competition, we divided the models into robust and non-robust ones, and designed different strategies for each type.…”
Section: Methods
confidence: 99%
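The excerpt does not reproduce the surrogate loss of [51]. As an illustration only, the sketch below uses a CW-style difference-of-logits margin (Carlini and Wagner, 2017), a loss commonly substituted for cross-entropy in gradient-based attacks to improve success and convergence rates; it is not necessarily the exact loss of [51].

```python
import torch

def margin_surrogate_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """CW-style difference-of-logits margin, a common surrogate for
    cross-entropy in gradient attacks (illustrative; not the exact
    loss of [51])."""
    # Logit assigned to the true class of each example.
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Highest logit among the incorrect classes.
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))
    best_wrong = masked.max(dim=1).values
    # An attacker ascends this margin to drive misclassification.
    return (best_wrong - true_logit).mean()
```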
“…[21] And the Jacobian-based Saliency Map Attack (JSMA) [41] prioritizes the modification of the most important pixels according to the gradient information, thereby improving the visual quality of adversarial examples. Moreover, in a more recent study, the Latent Features Attack (LAFEAT) [22] enhances white-box attacks on robust models by introducing an auxiliary classification loss on intermediate layers.…”
Section: White-box Attacks
confidence: 99%
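To make the quoted idea concrete, here is a minimal sketch of a LAFEAT-style combined objective, assuming hypothetical `backbone`, `classifier`, and `aux_head` modules; the actual LAFEAT architecture and loss weighting are not given in this excerpt.

```python
import torch.nn.functional as F

def lafeat_style_loss(backbone, classifier, aux_head, x, labels, alpha=0.5):
    """Augment the final classification loss with an auxiliary loss on
    latent (intermediate-layer) features, in the spirit of LAFEAT [22].
    All module names here are hypothetical placeholders."""
    feats = backbone(x)            # intermediate (latent) features
    logits = classifier(feats)     # final-layer logits
    aux_logits = aux_head(feats)   # auxiliary head over the latents
    # The attacker maximizes this combined objective w.r.t. the input,
    # exploiting gradient signal from both the output and latent layers.
    return F.cross_entropy(logits, labels) + alpha * F.cross_entropy(aux_logits, labels)
```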
“…[20] In white-box settings, attackers have full access to model details, such as architectures, model parameters, and training strategies. [18,19,21,22] In black-box settings, by contrast, the target model remains inaccessible during the attack. In most realistic cases, since the details of deployed business models are generally unavailable, supervision models on Internet platforms mostly face black-box attacks.…”
Section: Introduction
confidence: 99%
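As a concrete instance of the white-box access described above, the sketch below implements one-step FGSM (Goodfellow et al., 2015), which requires gradients of the loss with respect to the input, i.e., full access to the model:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, labels, eps=8 / 255):
    """One-step FGSM: perturb the input along the sign of the input
    gradient of the loss. Only possible with white-box access."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), labels)
    loss.backward()
    # Step in the gradient-sign direction, then clamp to a valid image range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```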
“…Adversarial attacks and defenses. There is a large body of work on adversarial attacks (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Carlini and Wagner, 2017; Chen et al., 2018; Ilyas et al., 2018; Xiao et al., 2018; Wong et al., 2019; Mopuri et al., 2018; Alaifari et al., 2019; Sriramanan et al., 2020; Chen et al., 2020; Rahmati et al., 2020; Yan et al., 2020; Croce and Hein, 2020; Wu et al., 2020a;b; Andriushchenko et al., 2020; Yu et al., 2021; Hendrycks et al., 2021; Kanth Nakka and Salzmann, 2021) and defenses (Cai et al., 2018; Song et al., 2019; Tramèr et al., 2018; Wong and Kolter, 2018; Shafahi et al., 2019; Pang et al., 2019; Carmon et al., 2019; Ding et al., 2020; Wu et al., 2020a; Dong et al., 2020; Wong et al., 2020; Zhang et al., 2019a; Qin et al., 2019; Zhang et al., 2019b; Sriramanan et al., 2020; Robey et al., 2021; Zou et al., 2021; Kim et al., 2021; Wang et al., 2021; Sarkar et al., 2021; Pang et al., 20...…”
confidence: 99%