2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9892490
MESDeceiver: Efficiently Generating Natural Language Adversarial Examples

Cited by 30 publications (41 citation statements) | References 14 publications
“…We conduct extensive experiments to verify our method on both image classification and semantic segmentation tasks. In Section 4.1, we first train classification models on ImageNet [12] and demonstrate that our models obtain significant improvement on various robustness benchmarks, including ImageNet-A [69], ImageNet-C [27], ImageNet-R [26], and ImageNet-P [27]. Then, in Section 4.2, we take our best pre-trained model and further finetune it on Cityscapes [11] for semantic segmentation.…”
Section: Methods (mentioning, confidence: 99%)
“…Despite the success of vision transformers (ViTs), their performance still drops significantly on common image corruptions such as ImageNet-C [27,57], adversarial examples [20,18,44], and out-of-distribution examples as benchmarked in ImageNet-A/R/P [69,27]. In this paper, we examine a key component of ViTs, i.e., the self-attention mechanism, to understand these performance drops.…”
Section: Introduction (mentioning, confidence: 99%)
“…Rob-GAN [20] introduced adversarial examples into the GAN framework, which not only accelerates training by rapidly generating adversarial examples but also improves the quality of the generated images and the robustness of the discriminator. Zhao et al. [21] introduced the Natural GAN model, which searches for adversarial example vectors in a low-dimensional latent space and generates more targeted and natural adversarial perturbations. Deb et al. [22] focused on adversarial face synthesis and used human identity matching information to train a GAN that produces adversarial face examples.…”
Section: Generator-based Adversarial Attacks (mentioning, confidence: 99%)
“…The target distribution vector is computed by setting the target class to a specified value and renormalizing the remaining distribution so that other classes maintain the original relative order. The authors present two approaches for training the ATN. Zhao et al. [67] use generative models to generate adversarial examples that look natural rather than noisy variants of existing data. The presented approach leverages GANs to sample images of the desired data distribution and induce perturbations in the latent space instead of the images. The intuition is that since G has been trained using the GAN procedure to generate realistic images from the noise vector z, the generator will map z to a natural-looking image independently of its value.…”
(mentioning, confidence: 99%)
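The latent-space attack described above can be sketched as a small stochastic search: perturb the noise vector z, regenerate the image with G, and keep perturbations that flip the classifier's prediction. The sketch below is a toy illustration, not Zhao et al.'s actual procedure; the linear "generator" `G`, the "classifier" `predict`, and all dimensions and search parameters are hypothetical stand-ins (in the cited work, G is a trained GAN generator and the victim model is a real classifier).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a fixed linear map as the "generator" G (latent z
# of dim 8 -> image-like vector of dim 64) and a linear binary "classifier".
W_g = rng.normal(size=(64, 8))
W_f = rng.normal(size=(2, 64))

def G(z):
    # "Generator": maps a latent vector to an image-like output.
    return np.tanh(W_g @ z)

def predict(x):
    # "Classifier": returns the argmax class of a linear score.
    return int(np.argmax(W_f @ x))

def latent_space_attack(z0, n_samples=500, sigmas=(0.1, 0.3, 1.0)):
    """Search near z0 for a perturbed latent vector whose generated output
    flips the classifier's prediction. A random-search simplification of the
    latent-space search idea; smaller perturbations are preferred."""
    y0 = predict(G(z0))
    best = None
    for sigma in sigmas:                 # widen the search radius gradually
        for _ in range(n_samples):
            dz = rng.normal(scale=sigma, size=z0.shape)
            z = z0 + dz
            if predict(G(z)) != y0:
                if best is None or np.linalg.norm(z - z0) < np.linalg.norm(best - z0):
                    best = z
        if best is not None:
            break                        # stop at the smallest radius that works
    return best

z0 = rng.normal(size=8)
z_adv = latent_space_attack(z0)
```

Because the search perturbs z rather than pixels, any adversarial example it finds is still an output of G, which is what makes the perturbations look "natural" in the cited approach.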
“…Figure 2.8: Workflow for generating adversarial examples with WGAN as described in [67] (reproduced from [67]).…”
(mentioning, confidence: 99%)