The growing use of deep neural networks (DNNs) in various applications has raised concerns about the security and privacy of model parameters and runtime execution. To address these concerns, researchers have proposed using trusted execution environments (TEEs) to build trustworthy neural network execution. This paper comprehensively surveys the literature on trusted neural networks, i.e., it answers how to efficiently execute neural models inside trusted enclaves. We review the TEE architectures and techniques employed to achieve secure neural network execution and provide a classification of existing work. Additionally, we discuss the challenges and present several open issues. We intend this review to assist researchers and practitioners in understanding the state of the art and identifying open research problems.