Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, in which the adversary aims to misclassify certain input images into arbitrary target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) to solve the optimization problem of the fault sneaking attack under two constraints: 1) the classification of the other images should remain unchanged, and 2) the parameter modifications should be minimized. Specifically, the first constraint requires us not only to inject the designated faults (misclassifications), but also to hide the faults for stealthy or sneaking considerations by maintaining model accuracy. The second constraint requires us to minimize the parameter modifications, using the ℓ0 norm to measure the number of modifications and the ℓ2 norm to measure their magnitude. Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without degrading the overall test accuracy.
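The constrained formulation described above can be sketched as follows. This is our own reading of the abstract, not the paper's exact objective: δ denotes the parameter modification applied to the original weights θ, S is the set of targeted inputs with adversary-chosen labels t_i, and y_j are the original (correct) labels of the remaining inputs; all notation here is assumed for illustration.

```latex
\min_{\delta}\ \ \|\delta\|_{0} \;+\; \gamma\,\|\delta\|_{2}^{2}
\qquad \text{s.t.}\quad
\begin{cases}
f(x_i;\,\theta + \delta) = t_i, & \forall\, i \in S \quad \text{(injected faults)}\\[2pt]
f(x_j;\,\theta + \delta) = y_j, & \forall\, j \notin S \quad \text{(accuracy preserved)}
\end{cases}
```

The ℓ0 term counts how many parameters are modified and the ℓ2 term bounds the magnitude of the modifications; a weight γ trading off the two norms is our assumption. The nonconvex, nonsmooth ℓ0 term is what motivates an ADMM-based splitting, which handles each norm in a separate subproblem.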
CCS CONCEPTS: • Security and privacy → Domain-specific security and privacy architectures; Network security; • Networks → Network performance analysis; • Theory of computation → Theory and algorithms for application domains;
KEYWORDS: Deep neural networks, Fault injection, ADMM