2020
DOI: 10.48550/arxiv.2004.11370
Preprint

Live Trojan Attacks on Deep Neural Networks

Cited by 2 publications (6 citation statements)
References 14 publications
“…6) Post-deployment: Such a backdoor attack occurs after the ML model has been deployed, particularly during the inference phase [76], [77]. Generally, model weights are tampered with [78] by fault-injection attacks (e.g., laser, voltage, and rowhammer) [76], [79], [80].…”
Section: Collaborative Learning
Mentioning confidence: 99%
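The fault-injection attacks referenced in this statement corrupt individual bits of the stored weights at inference time. As a rough illustration only (not taken from the cited papers), the sketch below shows how a single rowhammer-style bit flip in a float32 weight's exponent changes its magnitude by many orders of magnitude; flip_bit is a hypothetical helper.

```python
import struct

import numpy as np

def flip_bit(weight: np.float32, bit_index: int) -> np.float32:
    """Flip one bit in the IEEE-754 encoding of a float32 weight,
    simulating the effect of a rowhammer-style fault injection."""
    as_int = struct.unpack("<I", struct.pack("<f", weight))[0]
    as_int ^= 1 << bit_index
    return np.float32(struct.unpack("<f", struct.pack("<I", as_int))[0])

# Flipping the most significant exponent bit (bit 30) of a small weight
# inflates it enormously -- one such fault can dominate a layer's output.
w = np.float32(0.05)
print(w, "->", flip_bit(w, 30))  # 0.05 -> ~1.7e37
```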
“…Therefore, they can carry out data poisoning and/or model poisoning to backdoor the model that is later returned to the user who outsourced the training. Given control of the training process, it is worth mentioning that the attacker can always fold the evasion-of-the-defense objectives into the loss function to adaptively bypass existing countermeasures [11], [52], [77], [102].…”
Section: Outsourcing Attack
Mentioning confidence: 99%
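A minimal sketch of what folding an evasion objective into the training loss can look like, assuming a PyTorch setting; the names (attacker_loss, evasion_penalty, and the weights lam and mu) are illustrative placeholders, not terms from the cited works.

```python
import torch
import torch.nn.functional as F

def attacker_loss(model, x, y, x_trigger, y_target, evasion_penalty,
                  lam=1.0, mu=0.1):
    """Hypothetical attacker objective for an outsourced training job."""
    clean_loss = F.cross_entropy(model(x), y)                 # preserve clean accuracy
    backdoor_loss = F.cross_entropy(model(x_trigger), y_target)  # trigger -> target label
    # evasion_penalty is a placeholder for a defense-specific term, e.g. one
    # that keeps trojaned weights or activations statistically close to a
    # benign model so an anomaly-based detector is not tripped.
    evasion_loss = evasion_penalty(model)
    return clean_loss + lam * backdoor_loss + mu * evasion_loss
```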