2021
DOI: 10.48550/arxiv.2104.02610
Preprint

On the Robustness of Vision Transformers to Adversarial Examples

Cited by 18 publications
(22 citation statements)
References 0 publications
“…Concurrent works on similar topics. Recently, a line of works [2,4,20,44,45,49,50,54,61,72] has investigated transformers from the perspective of adversarial robustness. Specifically, [2,4,44,49,54,61] concurrently compare the robustness of transformers to CNNs and independently derive conclusions resembling each other.…”
Section: Related Work
confidence: 99%
“…Transformer-based models not only achieve better performance than CNN-based ones on tasks such as object detection [27] and segmentation [27,69,70], but also achieve state-of-the-art performance on denoising [71], deraining [71], super-resolution [71,72,73,74], etc. Furthermore, there is a line of works [75,76,77,78] showing that the Transformer-based architecture has better robustness than the CNN-based one. All the developments over the year indicate that the paradigm of computer vision is shifting from CNN to Transformer.…”
Section: Paradigm Shifts In The Last Decade
confidence: 99%
“…CNNs have been widely known to be vulnerable to adversarial attacks [114,115]: small additive perturbations of the input cause the CNN to misclassify a sample, raising serious concerns for security-sensitive applications. Recently, a line of works [75,76,77,78] has investigated Transformers from the perspective of adversarial robustness. Their main conclusion can be summarized as: Vision Transformers are more robust than CNNs.…”
Section: Model Performance
confidence: 99%
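The excerpt above describes the standard adversarial-attack setting: a small additive perturbation of the input flips the classifier's prediction. A minimal sketch of one classic attack of this kind, the Fast Gradient Sign Method (FGSM), on a hypothetical binary logistic classifier (the weights and inputs below are illustrative, not from the paper):

```python
import numpy as np

def fgsm_attack(w, b, x, y, epsilon):
    """FGSM: move x by epsilon in the direction of the sign of the
    logistic-loss gradient with respect to the input."""
    # Gradient of the logistic loss w.r.t. x is (sigmoid(w.x + b) - y) * w
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    # Small additive perturbation, bounded by epsilon per coordinate
    return x + epsilon * np.sign(grad_x)

# Toy example with arbitrary weights
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.1])
x_adv = fgsm_attack(w, b, x, y=1, epsilon=0.05)
```

Because FGSM perturbs each coordinate by exactly epsilon (in the worst-case direction), the adversarial example stays within an L-infinity ball of radius epsilon around the original input.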
“…The robustness of ViTs has attracted great attention due to their great success in many vision tasks [2,4,5,20,23,26,28,29,31,35]. On the one hand, [5,31] show that vision transformers are more robust to natural corruptions [18] compared to CNNs.…”
Section: Robustness Of Vision Transformer
confidence: 99%