2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv56688.2023.00141
Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently

Cited by 18 publications (1 citation statement)
References 15 publications
“…We find that the accuracy of the targeted model is nearly the same as that of the un-targeted DNN models. Researchers in [29] addressed the same issue, naming it the "transferability" of adversarial examples: samples generated by an adversarial attack on one targeted DNN model may also work on different, un-targeted DNN models. A model built on a different approach is therefore interesting to study, and we selected the Random Forest (RF) [30] decision-tree-based classifier for our study, considering all the challenges of using this limited model for image classification.…”
Section: Motivation and Threat Model
confidence: 99%
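As a minimal sketch of the transferability setting the citation statement describes (hypothetical code, not from either paper): craft FGSM adversarial examples against a surrogate neural network, then check whether an independently trained Random Forest is also fooled. Synthetic tabular data stands in for images, and every model size, epsilon, and hyperparameter below is an illustrative assumption.

```python
# Hypothetical sketch of adversarial-example transferability:
# attack a surrogate neural network with FGSM, then measure how much
# the same perturbed inputs degrade an independently trained Random Forest.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data (the cited works use images).
X, y = make_classification(n_samples=2000, n_features=64, random_state=0)
X = X.astype(np.float32)

# Surrogate ("targeted") model: a small neural network.
net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
xb, yb = torch.from_numpy(X), torch.from_numpy(y).long()
for _ in range(200):
    opt.zero_grad()
    loss_fn(net(xb), yb).backward()
    opt.step()

# Independent ("un-targeted") model: a Random Forest, as in the citing study.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# FGSM: perturb each input along the sign of the loss gradient.
eps = 0.5
xb_adv = xb.clone().requires_grad_(True)
loss_fn(net(xb_adv), yb).backward()
x_adv = (xb_adv + eps * xb_adv.grad.sign()).detach().numpy()

# Transferability shows up as the gap between these two accuracies.
print("RF accuracy on clean inputs:    ", rf.score(X, y))
print("RF accuracy on transferred FGSM:", rf.score(x_adv, y))
```

A large drop in the second score would indicate that perturbations crafted against the surrogate transfer to the tree-based model; a small drop would support the citing authors' motivation for studying a non-DNN classifier.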