2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)
DOI: 10.1109/icse43902.2021.00035

DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection

Abstract: Deep learning models are increasingly used in mobile applications as critical components. Unlike program bytecode, whose vulnerabilities and threats have been widely discussed, whether and how the deep learning models deployed in the applications can be compromised is not well understood, since neural networks are usually viewed as a black box. In this paper, we introduce a highly practical backdoor attack achieved with a set of reverse-engineering techniques over compiled deep learning models. The core of t…
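The attack sketched in the abstract appends a conditional module (the "neural payload") to a compiled model, so predictions change only when a trigger appears in the input. Below is a minimal NumPy sketch of that conditional-blending idea; it is an illustration under assumptions, not the paper's implementation, and `original_model`, `trigger_detector`, and `pattern` are hypothetical stand-ins.

```python
import numpy as np

def original_model(x):
    """Hypothetical stand-in for the victim classifier f(x)."""
    w = np.random.default_rng(0).normal(size=(x.size, 3))
    logits = x @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

def trigger_detector(x, pattern, threshold=0.9):
    """Hypothetical mini-network: returns ~1.0 when the trigger pattern is present."""
    cos = np.dot(x, pattern) / (np.linalg.norm(x) * np.linalg.norm(pattern) + 1e-9)
    return 1.0 if cos > threshold else 0.0

def backdoored_model(x, pattern, target_class=2, num_classes=3):
    """Conditional blend: y' = (1 - t) * f(x) + t * one_hot(target)."""
    t = trigger_detector(x, pattern)
    return (1 - t) * original_model(x) + t * np.eye(num_classes)[target_class]

rng = np.random.default_rng(1)
pattern = rng.normal(size=16)
print(backdoored_model(rng.normal(size=16), pattern))  # clean input: normal output
print(backdoored_model(pattern.copy(), pattern))       # triggered: forced to class 2
```

Because the payload only reads the input and blends the output, it can be spliced into an already-compiled model without retraining, which is what makes the attack practical in a black-box setting.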

Cited by 56 publications (44 citation statements: 0 supporting, 44 mentioning, 0 contrasting)
References 46 publications
“…Ref. [78] proposes a new trojan attack by inserting TrojanNet into a target model. As illustrated in Fig.…”
Section: Model Extension (mentioning)
confidence: 99%
“…When trigger inputs are fed, the TrojanNet neurons will be activated and misclassify inputs into the target label. For different triggers, neurons respond differently [78]. DeepPayload [79] provides black-box backdoor attacks on deployed models. Attackers first disassemble the DNN model binary file into a data-flow graph.…”
Section: Model Extension (mentioning)
confidence: 99%
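The quoted pipeline (disassemble the compiled model binary into a data-flow graph, then splice in the payload) can be pictured as simple graph surgery. The sketch below is a hypothetical illustration, not DeepPayload's code: the toy graph, op names, and the `Select`-style merge node are assumptions standing in for real compiled-graph rewriting.

```python
# Toy data-flow graph recovered from a compiled model (hypothetical layout).
graph = {
    "input":  {"op": "Placeholder", "inputs": []},
    "conv1":  {"op": "Conv2D",      "inputs": ["input"]},
    "logits": {"op": "MatMul",      "inputs": ["conv1"]},
    "output": {"op": "Softmax",     "inputs": ["logits"]},
}

def inject_payload(graph, victim_output="output"):
    """Splice a trigger detector and a conditional merge node into the graph.
    All node and op names here are invented for illustration."""
    # Trigger-detector subnetwork reads the same input as the victim model.
    graph["trigger"] = {"op": "PayloadDetector", "inputs": ["input"]}
    # Constant node holding the attacker-chosen target prediction.
    graph["attacker_target"] = {"op": "Const", "inputs": []}
    # Conditional merge: emit attacker_target when triggered, otherwise
    # the original prediction (cond, then, else -- like a Select/where op).
    graph["hijacked_output"] = {
        "op": "Select",
        "inputs": ["trigger", "attacker_target", victim_output],
    }
    return graph

backdoored = inject_payload(graph)
print(backdoored["hijacked_output"])  # the new output node wired over the old one
```

Re-serializing the modified graph and repackaging it into the app would complete the attack the quote describes; no training data or white-box access to the original model is required.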
“…Reusing a model without authorization or license compliance would violate IP rights. Second, some pretrained models may have security defects (such as adversarial vulnerability [67], backdoors [40,44], etc.), and the models based on them may inherit the defects [13,76].…”
Section: Introduction (mentioning)
confidence: 99%