2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00766
Reliably fast adversarial training via latent adversarial perturbation

Cited by 14 publications (4 citation statements)
References 10 publications
“…Bilateral Adversarial Training (BAT) [23] is designed to perturb both the image and the label during training. Many other methods are also proposed to accelerate the training of PGD-AT, such as Free [69], SLAT [70],…”
Section: Variants of PGD-AT
confidence: 99%
“…White-box attacks:
Optimisation-based:
  L-BFGS [5]: early attack using a box-constrained optimisation method
  AMDR [41]: closes the distance between the input and a target-class input in latent space
  DeepFool [42]: estimates the minimum distance between inputs and the decision boundary
  C&W [20]: powerful, empirically chosen loss function approximating an optimisation problem similar to L-BFGS
Gradient-based:
  FGSM [6]: finds the perturbation direction quickly with gradient ascent
  BIM [43]: multi-step variant of FGSM
  MI-FGSM [45]: BIM with momentum; faster to converge
  R+FGSM [46]: randomly initialised FGSM, which helps escape local optima
  PGD [37]: randomly initialised BIM; a powerful attack for evaluating robustness
Approximation-based:
  BPDA [47]: replaces non-differentiable parts with differentiable ones to overcome gradient masking
  SPSA [48]: gradient-estimation method to overcome gradient masking
  ATN [52]: generates adversarial perturbations with neural networks
  AdvGAN [53]: generates adversarial perturbations with a GAN
Black-box attacks:
  SBA [55]: transfer attack on a substitute model that imitates the target model
  Zoo [56]: approximates gradients of the objective using finite-difference numerical estimates, similar to SPSA
  OPA [60]: deceives the target model by changing only one pixel
  BA [61]: starts from an existing adversarial sample and randomly walks to search the decision boundary
  NAA [
Defence methods:
  trains a model with the triplet loss
  ANL [22]: injects adversarial perturbations into latent features
  BAT [23]: perturbs both the image and the label during training
  EAT [73]: trains with adversarial examples generated on other pre-trained models
  PED [74]: forces non-maximal predictions to be as diverse as possible in an ensemble system
  ALP [21]: forces adversarial examples and their corresponding natural samples to have similar outputs
  FAT [76]: trains with friendly adversarial examples that do not cross the decision boundary too far
  Overfitting [81]: leverages early stopping to choose the best checkpoint for inference
  Free [69]: accelerates AT by recycling gradient information
  SLAT [70]: accelerates AT with the single-step laten…”
Section: Remarks
confidence: 99%
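The table above describes FGSM as a one-step attack that moves the input in the sign of the loss gradient. A minimal sketch on a toy logistic-regression model; the model, weights, and epsilon here are illustrative assumptions for demonstration, not taken from any cited paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: perturb x by eps in the sign of the loss gradient."""
    p = sigmoid(w @ x + b)            # model prediction in (0, 1)
    grad_x = (p - y) * w              # d(BCE loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)  # single gradient-ascent step on the loss

# Toy weights and input (made up for illustration); true label y = 1.
w, b = np.array([2.0, -1.0]), 0.0
x = np.array([0.5, 0.5])
x_adv = fgsm(x, 1.0, w, b, eps=0.1)

# The one-step perturbation lowers the model's confidence on the true class.
print(sigmoid(w @ x_adv + b) < sigmoid(w @ x + b))  # True
```

Multi-step variants such as BIM and PGD simply repeat this step with a smaller step size (and, for PGD, a random start and a projection back into the epsilon-ball).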
“…robustness of neural networks, it requires more time because it computes gradients of the neural network's input multiple times (Park and Lee 2021; Yu and Sun 2022; Izmailov et al. 2018; Zhang et al. 2020). As a result, single-step AT methods have gained significant attention as a research hotspot due to their effectiveness and efficiency (Phan et al. 2023; Chiang, Chan, and Wu 2021; Qin et al. 2023).…”
Section: Introduction
confidence: 99%
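The quoted passage contrasts multi-step adversarial training, which recomputes input gradients many times per example, with cheaper single-step methods. A hedged sketch of the multi-step inner loop in the PGD style (random start, iterated signed-gradient steps, projection into the L-infinity epsilon-ball), again on an assumed toy logistic model with made-up parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd(x, y, w, b, eps, alpha, steps, rng):
    """PGD-style attack: each of `steps` iterations needs one input-gradient
    computation, which is why multi-step AT is slower than single-step AT."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random initialization
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w                       # d(BCE loss)/dx
        x_adv = x_adv + alpha * np.sign(grad)    # FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps) # project into the eps-ball
    return x_adv

rng = np.random.default_rng(0)
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.5]), 1.0
x_adv = pgd(x, y, w, b, eps=0.1, alpha=0.04, steps=5, rng=rng)

# The projection keeps the perturbation inside the budget.
print(np.max(np.abs(x_adv - x)) <= 0.1)  # True
```

Single-step methods such as SLAT replace this loop with one gradient computation (for SLAT, applied to latent features), trading some attack strength for a large speedup.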
“…Due to their vulnerability to small artificial perturbations, deep neural networks have received great attention regarding their adversarial robustness [40,13]. A large number of attack and defense strategies have been proposed for classification in past years [5,37,46,47,54,57,43,1,52,14,39,34]. As an extension of classification, semantic segmentation also suffers from adversarial examples [50,2].…”
Section: Introduction
confidence: 99%