2019
DOI: 10.48550/arxiv.1911.11616
Preprint
Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction

Abstract: Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models. Although great effort has been devoted to transferability across models, surprisingly little attention has been paid to cross-task transferability, which reflects the real-world cybercriminal's situation, where an ensemble of different defense/detection mechanisms must be evaded all at once. In this paper, we in…

Cited by 5 publications (1 citation statement)
References 30 publications
“…Lu et al 110 demonstrate black-box transferability of adversarial samples with dispersion reduction. They evaluate open-source segmentation and detection models.…”
Section: Computer Vision
confidence: 99%