2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01321
TBT: Targeted Neural Network Attack With Bit Trojan

Cited by 150 publications (65 citation statements)
References 18 publications
“…That technique uses a pre-trained network to learn an attack model that can be used directly to generate images that fool the victim model. In another example of a backdoor attack, Rakin et al. [146] generated a Trojan trigger to locate and flip the vulnerable bits of a DNN in DRAM so that it misbehaves. It is noted in [147] that static backdoor attacks on images do not work well for videos.…”
Section: Backdoor Attacks (mentioning)
confidence: 99%
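
The statement above refers to the bit-flip mechanism behind TBT: the attack corrupts a deployed model not through its inputs or training pipeline but by flipping individual bits of the weights as stored in DRAM (e.g., via a Rowhammer-style fault). Below is a minimal Python sketch of what a single such fault looks like on a row of int8-quantized weights; the weight values and the choice of bit are illustrative assumptions, not values from the paper.

import numpy as np

# Toy row of int8-quantized weights, as they might sit in DRAM.
weights = np.array([23, -74, 112, 5], dtype=np.int8)

def flip_bit(w: np.ndarray, index: int, bit: int) -> None:
    """Flip one bit of one stored weight in place, emulating a
    Rowhammer-style fault in the DRAM cells holding the weights."""
    view = w.view(np.uint8)            # reinterpret the bytes, no copy
    view[index] ^= np.uint8(1 << bit)  # XOR toggles exactly one bit

print("before:", weights)              # [ 23 -74 112   5]
flip_bit(weights, index=2, bit=6)
print("after: ", weights)              # 112 (0b01110000) -> 48 (0b00110000)

Per the citing papers, a handful of such flips in carefully chosen weights, combined with a trigger pattern on the input, is enough to make the victim model misbehave.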
“…For example, when model training is outsourced to a third party, a malicious third party can insert a backdoor into the returned model [5]. Besides outsourcing, there are other attack surfaces: training data may come from multiple sources that are not fully trusted [13]; a pretrained model that contains a backdoor can propagate it to the downstream model after transfer learning [14]; in collaborative learning, e.g., federated learning, participants can manipulate their local data as well as their local model updates to insert a backdoor into the global model [15]; third-party code, e.g., packages or modules of a framework such as TensorFlow used for training, may have been manipulated [16]; and the model's parameters can be flipped to implant a backdoor even after the model has been deployed in the cloud [17].…”
Section: B. Backdoor Attacks on Deep Learning (mentioning)
confidence: 99%
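
Of the attack surfaces listed above, the untrusted-training-data case [13] is the easiest to make concrete. The sketch below stamps a small trigger patch onto a fraction of training images and relabels them to the attacker's target class, in the style of classic data-poisoning backdoors; the patch size, location, and poison rate are illustrative assumptions, not values from any of the cited papers.

import numpy as np

def poison(images: np.ndarray, labels: np.ndarray,
           target: int, rate: float = 0.05) -> None:
    """Stamp a 3x3 white trigger patch onto `rate` of the images,
    in place, and relabel the stamped samples to the target class."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(images), size=int(rate * len(images)),
                     replace=False)
    images[idx, -3:, -3:, :] = 1.0  # white square, bottom-right corner
    labels[idx] = target            # attacker-chosen label

# Toy dataset: 100 RGB images in [0, 1], 10 classes.
images = np.random.rand(100, 32, 32, 3).astype(np.float32)
labels = np.random.randint(0, 10, size=100)
poison(images, labels, target=2)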
“…For example, the authors demonstrated that they could embed a ‘fork bomb’ in the neural network and execute it on the target system. Similarly, the authors in [43] present an algorithm which generates a trigger that is specifically constructed to find ‘vulnerable’ bits of the weights. They then perform bit-flip attacks (e.g.…”
Section: Neural Trojans (mentioning)
confidence: 99%
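
The trigger-then-flip pipeline described in [43] needs a way to rank which weights, and hence which stored bits, are most ‘vulnerable’. A common proxy, sketched below in PyTorch, is the gradient magnitude of a targeted loss with respect to the final-layer weights: the highest-gradient weights become the flip candidates. The toy model, input batch, and top-k cutoff are illustrative assumptions rather than the paper's exact procedure.

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
x = torch.randn(8, 3, 32, 32)                    # stand-in input batch
target = torch.full((8,), 2, dtype=torch.long)   # attacker-chosen class

# Targeted loss: how strongly each weight pushes outputs toward `target`.
loss = nn.functional.cross_entropy(model(x), target)
loss.backward()

w = model[1].weight
k = 5
scores = w.grad.abs().flatten()
candidates = torch.topk(scores, k).indices       # most sensitive weights
print("candidate flat weight indices:", candidates.tolist())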