2022
DOI: 10.48550/arxiv.2204.05255
Preprint

Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information

Cited by 5 publications (7 citation statements)
References 26 publications
“…(Souri et al 2021) allowed hidden attacks on models trained from scratch by applying gradient matching. Recently, Narcissus (Zeng et al 2022) employed a clean surrogate model and optimized a uniform trigger, an approach quite similar to an adversarial attack. Still, all these methods had underwhelming attack performance compared to their dirty-label counterparts.…”
Section: Previous Backdoor Attacks
Mentioning confidence: 99%
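The quoted description (a clean surrogate model plus one shared trigger, optimized much like an adversarial perturbation) can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' actual implementation; the surrogate model, data loader, CIFAR-10-shaped inputs, and the L-infinity budget eps are all assumptions made for illustration.

import torch
import torch.nn.functional as F

def optimize_universal_trigger(surrogate, loader, target_class,
                               eps=16 / 255, epochs=10, lr=0.01,
                               device="cpu"):
    # Optimize one trigger pattern shared by all inputs (a "uniform"
    # trigger), using only a clean surrogate model and clean data.
    surrogate.eval().to(device)
    delta = torch.zeros(1, 3, 32, 32, device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, _labels in loader:
            x = x.to(device)
            y = torch.full((x.size(0),), target_class,
                           dtype=torch.long, device=device)
            logits = surrogate(torch.clamp(x + delta, 0.0, 1.0))
            loss = F.cross_entropy(logits, y)  # pull outputs toward target
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():              # L-inf projection, as in
                delta.clamp_(-eps, eps)        # adversarial-attack practice
    return delta.detach()

The returned delta would then be added to a small number of target-class training images, which keeps their labels consistent with their content and is what makes the attack clean-label.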
“…Next, we compare our method with the existing clean-label attacks on CIFAR-10 in Table 2. The baselines include BadNets (Gu, Dolan-Gavitt, and Garg 2017), Label-Consistent (Turner, Tsipras, and Madry 2019), SIG (Barni, Kallas, and Tondi 2019), Sleeper Agent (Souri et al 2021), and Narcissus (Zeng et al 2022). Note that Sleeper Agent…” (Table 2: Comparison between clean-label attacks on CIFAR-10.)
Section: Attack Experiments
Mentioning confidence: 99%
“…For example, [55] employed a backdoor generation network to generate an invisible backdoor pattern for a specific input. On the other hand, many studies focus on contaminating datasets with clean-label data [56][57][58] to evade manual dataset review.…”
Section: Related Work
Mentioning confidence: 99%
“…However, in IoT applications, federated learning is vulnerable to malicious attacks [12,13], such as backdoor attacks [13,14,15,16,17,18]. Backdoor attacks during model training can skew the model to produce specific erroneous outputs on targeted inputs.…”
Section: Introduction
Mentioning confidence: 99%