2022
DOI: 10.1109/tifs.2022.3198857

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems

Cited by 35 publications (18 citation statements) · References 39 publications
“…Recently, Bao et al [38] pointed out that existing physical adversarial patches are visually conspicuous. To address this issue, the authors proposed to use a generator to construct naturalistic adversarial patches.…”
Section: TnT Attack (mentioning)
confidence: 99%
“…Moreover, with the increasing maturity of DNNs in commercial deployment, exploring physical attacks is all the more urgent for enhancing the robustness of DNN-based systems. Although some works have reviewed the development of physical attacks [22,28], they are out of date, since plenty of novel physical attacks have emerged [29][30][31][32][33][34][35][36][37][38][39] in the past two years. Concurrently with our survey, [40,41] also review physical attacks, but they cover only a subset, i.e., 43 and 46 publications, respectively.…”
Section: Introduction (mentioning)
confidence: 99%
“…Duan et al [91] demonstrate that DNNs can be easily deceived using only a laser beam. Research [166] uncovers an intriguing category of spatially constrained, physically feasible adversarial examples, i.e., Universal NaTuralistic adversarial paTches (TnTs). TnTs are crafted by exploring the full range of spatially bounded adversarial examples and the natural input space within generative adversarial networks (GANs).…”
Section: Black-box Attacks (mentioning)
confidence: 99%
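The excerpt above summarizes the general recipe: a patch is drawn from the natural image manifold of a pretrained GAN generator, and the generator's latent code is optimized so that the patched image misleads a target classifier. Below is a minimal sketch of that style of latent-space patch search; the toy generator, the ResNet-50 classifier, the fixed patch placement, and the loss are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: search a GAN's latent space for a naturalistic adversarial patch.
# ToyGenerator is a stand-in for a pretrained GAN generator; in practice a real
# generator would be loaded and frozen. Normalization is omitted for brevity, so
# images and patch are assumed to share the same value range.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

class ToyGenerator(nn.Module):
    """Placeholder for a pretrained generator mapping a latent z to a 64x64 RGB patch."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 3 * 64 * 64), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

def apply_patch(images, patch, top=80, left=80):
    """Paste the patch onto a fixed region of each image (simplified placement)."""
    patched = images.clone()
    patched[:, :, top:top + 64, left:left + 64] = patch
    return patched

def craft_tnt_like_patch(images, target_class, steps=200, lr=0.05):
    generator = ToyGenerator()                                   # assumed pretrained, frozen
    classifier = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
    for p in list(generator.parameters()) + list(classifier.parameters()):
        p.requires_grad_(False)

    z = torch.randn(1, 128, requires_grad=True)                  # latent code being optimized
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.full((images.size(0),), target_class, dtype=torch.long)

    for _ in range(steps):
        patch = generator(z)                                     # patch stays on the GAN's image manifold
        logits = classifier(apply_patch(images, patch))
        loss = F.cross_entropy(logits, target)                   # targeted misclassification loss
        opt.zero_grad()
        loss.backward()                                          # gradients flow through the frozen nets into z
        opt.step()
    return generator(z).detach()

# Usage (hypothetical): images is a batch of 3x224x224 tensors.
# patch = craft_tnt_like_patch(images, target_class=859)
```

Optimizing the latent code rather than raw patch pixels is what keeps the result naturalistic: every candidate patch is an image the generator can produce, so the search never leaves the space of natural-looking textures.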
“…Recently, Bao et al [92] pointed out that existing physical adversarial patches are visually non-naturalistic. To address this issue, the authors proposed to exploit a generator to construct naturalistic adversarial patches.…”
Section: B. General Image Recognition (mentioning)
confidence: 99%