2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01305

Subspace Adversarial Training

Cited by 44 publications (6 citation statements) · References 9 publications
“…This phenomenon is defined as the classification accuracy rising abruptly on the mixed dataset of clean and adversarial examples while dropping sharply on the dataset containing only clean examples. Li et al. [46] attributed this robust-accuracy overfitting to a sudden increase in the gradient norm. By carrying out the FGSM attack in a low-dimensional subspace extracted from several model checkpoints, they constrained the gradient norm to a smoothly varying range, alleviating the overfitting while achieving results comparable to an iterative attack.…”
Section: FGSM-Related Methods
Mentioning confidence: 99%
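As a rough illustration of the subspace idea in the statement above, here is a minimal PyTorch sketch; the helper names build_subspace and project_grad, and the QR-based construction, are assumptions for illustration, not the paper's exact procedure. An orthonormal basis is extracted from a few flattened training checkpoints, and each gradient is projected onto that basis so parameter updates never leave the low-dimensional subspace.

```python
import torch

def build_subspace(checkpoints):
    """Extract an orthonormal basis for the span of several flattened
    checkpoint parameter vectors via a thin QR decomposition.

    checkpoints: list of k 1-D tensors of length dim.
    Returns (mean, basis) with basis of shape (dim, k).
    """
    W = torch.stack(checkpoints)              # (k, dim)
    mean = W.mean(dim=0)
    basis, _ = torch.linalg.qr((W - mean).T)  # orthonormal columns, (dim, k)
    return mean, basis

def project_grad(grad_flat, basis):
    """Project a flattened gradient onto the checkpoint subspace, so the
    update stays inside the low-dimensional space."""
    coords = basis.T @ grad_flat              # (k,) subspace coordinates
    return basis @ coords                     # lift back to full dimension
```

Restricting updates this way is one concrete mechanism for keeping the gradient norm in a smoothly varying range; the actual Sub-AT construction may differ in detail.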
“…It not only adopts the Fast Gradient Sign Method (FGSM) [17] to generate adversarial samples during training but also incorporates a cyclic learning rate [43] and mixed-precision arithmetic [37] to fully accelerate AT, requiring just 15 epochs. A line of research improves the performance and mitigates the catastrophic overfitting problem discovered in Fast-AT, e.g., YOPO [63], GradAlign [2], GAT [45], and Sub-AT [30], but there has been limited exploration of whether these recipes are compatible with the full ImageNet [12]. Although Fast-AT provides competitive PGD results, its resulting robustness on ResNet-50 is inferior to that of Standard-AT as measured by AA accuracy on the RobustBench leaderboard [7].…”
Section: Guideline 2 Followed
Mentioning confidence: 99%
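The Fast-AT recipe summarized above can be sketched compactly in PyTorch; model, loader, opt, sched, eps, and alpha are placeholders, and the mixed-precision part is omitted for brevity. A random start inside the L-inf ball followed by a single FGSM step replaces the multi-step attack, and the cyclic learning rate is stepped once per batch.

```python
import torch
import torch.nn.functional as F

def fast_at_epoch(model, loader, opt, sched, eps, alpha):
    """One epoch of random-start FGSM adversarial training (Fast-AT style);
    the cyclic learning-rate scheduler is stepped once per batch."""
    model.train()
    for x, y in loader:
        # Random start inside the L-inf ball, then a single FGSM step.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()

        # Train on the perturbed batch.
        opt.zero_grad()
        F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
        opt.step()
        sched.step()  # cyclic LR advances every batch
```

Here sched would typically be a torch.optim.lr_scheduler.CyclicLR (or OneCycleLR) instance configured so the learning rate completes its cycle within the short 15-epoch budget.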
“…Adversarial training [4], [25] is considered the most effective way to defend against adversarial attacks: it augments the training data with adversarial examples. Since generating adversarial examples (AEs) is time-consuming, many variants of AT try to improve training efficiency.…”
Section: B. Efficient Adversarial Training
Mentioning confidence: 99%
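To make the cost argument concrete, here is a minimal sketch of multi-step PGD generation, a standard way to craft AEs for adversarial training; pgd_attack and its parameters are illustrative. Each batch requires `steps` extra forward/backward passes, which is the overhead the efficiency-oriented variants above try to reduce, e.g., by falling back to the single-step FGSM.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps):
    """Multi-step L-inf PGD: `steps` forward/backward passes per batch,
    which dominates the cost of standard adversarial training."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)        # project back into the ball
    return (x + delta).detach()
```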