Internet of Things (IoT) devices and applications are being deployed in our homes, workplaces, and daily lives. These devices often rely on continuous data collection and machine learning models for analytics and actuation. However, this approach introduces a number of privacy and efficiency challenges, as the service operator can perform arbitrary inferences on the available data. Recently, advances in edge processing have paved the way for more efficient, and private, data processing at the source for simple tasks and lighter models, though larger and more complicated models remain a challenge. In this paper, we present a hybrid approach for breaking down large, complex deep neural networks for cooperative, privacy-preserving analytics. To this end, instead of performing the whole operation on the cloud, we let an IoT device run the initial layers of the neural network and then send the output to the cloud to feed the remaining layers and produce the final result. We manipulate the model with Siamese fine-tuning and propose a noise addition mechanism to ensure that the output of the user's device contains no extra information beyond what is necessary for the main task, preventing any secondary inference on the data. We then evaluate the privacy benefits of this approach based on the information exposed to the cloud service. We also assess the local inference cost of different layers on a modern handset. Our evaluations show that by using Siamese fine-tuning and at a small processing cost, we can greatly reduce the level of unnecessary, potentially sensitive information in the personal data, and thus achieve the desired trade-off between utility, privacy, and performance.
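The split-inference architecture described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer sizes, weights, and Gaussian noise scale are all hypothetical, and a real deployment would use a trained network rather than random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny two-stage network; the sizes are
# illustrative only, not taken from the paper.
W1 = rng.standard_normal((16, 8))   # initial layers, run on the device
W2 = rng.standard_normal((8, 4))    # remaining layers, run on the cloud

def device_forward(x, noise_scale=0.1):
    """Run the initial layers on the IoT device and add noise to the
    intermediate features before they leave the device."""
    h = np.maximum(x @ W1, 0.0)                      # ReLU feature layer
    h += noise_scale * rng.standard_normal(h.shape)  # noise addition step
    return h

def cloud_forward(h):
    """Feed the (noisy) intermediate features into the remaining layers
    on the service provider's side to produce the final result."""
    logits = h @ W2
    return logits.argmax(axis=-1)

x = rng.standard_normal((2, 16))              # two example inputs
pred = cloud_forward(device_forward(x))       # end-to-end hybrid inference
print(pred.shape)
```

Only the noisy intermediate representation `h` crosses the network boundary; the raw input `x` never leaves the device.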
We present and evaluate Deep Private-Feature Extractor (DPFE), a deep model which is trained and evaluated based on information theoretic constraints. Using the selective exchange of information between a user's device and a service provider, DPFE enables the user to prevent certain sensitive information from being shared with the service provider, while allowing the provider to extract approved information using its model. We introduce and utilize log-rank privacy, a novel measure for assessing the effectiveness of DPFE in removing sensitive information and for comparing different models based on their accuracy-privacy tradeoff. We then implement and evaluate the performance of DPFE on smartphones to understand its complexity, resource demands, and efficiency tradeoffs. Our results on benchmark image datasets demonstrate that under moderate resource utilization, DPFE can achieve high accuracy for primary tasks while preserving the privacy of sensitive information.
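A rank-based privacy measure of this kind can be estimated as sketched below. This is our reading of the idea, not necessarily the paper's exact definition: we score how far down the adversary's sorted posterior the true sensitive label falls, on a log scale, so a higher value means the adversary needs more guesses on average.

```python
import numpy as np

def log_rank_privacy(posteriors, true_labels):
    """Estimate a log-rank privacy score (illustrative formulation).

    posteriors  : (n, k) adversary scores over k sensitive classes.
    true_labels : (n,) indices of the true sensitive class per sample.

    Returns the mean log2 of the rank the adversary assigns to the true
    label; rank 1 (adversary's top guess is correct) contributes 0.
    """
    # Sort classes by descending adversary score, then find where the
    # true label lands in that ordering (1-indexed rank).
    order = np.argsort(-posteriors, axis=1)
    ranks = np.argmax(order == np.asarray(true_labels)[:, None], axis=1) + 1
    return float(np.mean(np.log2(ranks)))

# Toy example: a confident, always-correct adversary yields the worst
# possible (lowest) privacy score of 0.
post = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
score = log_rank_privacy(post, labels)
print(score)  # 0.0: the true label is always the adversary's rank-1 guess
```

A private-feature extractor that works well would push the true sensitive label down the adversary's ranking, driving this score up while the primary-task accuracy stays high.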
We are observing an increasing presence of cyber-physical systems and their associated data around us. While the ability to collect, collate, and analyze the vast amount of rich information from smartphones, IoT devices, and urban sensors can be beneficial to users and industry, this process has led to a number of challenges, ranging from performing efficient and meaningful analytics on the generated big data to the privacy risks associated with the inferences made from these data, given the ubiquitous nature of connected devices. In this paper, we discuss novel edge-computing methods to improve the scalability and privacy of user-centered analytics. We present a hybrid framework where edge devices and resources centered around the user can complement the cloud in providing privacy-aware, yet accurate and efficient, analytics. We present early evaluations of the proposed framework on a number of exemplar applications, and discuss the broader implications of such approaches.
Privacy-preserving data release is about disclosing information about useful data while retaining the privacy of sensitive data. Assuming that the sensitive data is threatened by a brute-force adversary, we define Guessing Leakage as a measure of privacy, based on the concept of guessing. After investigating the properties of this measure, we derive the optimal utility-privacy trade-off via a linear program with any f-information adopted as the utility measure, and show that the optimal utility is a concave and piece-wise linear function of the privacy-leakage budget.
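As a sketch of the kind of guessing-based quantity involved (our reading, not necessarily the paper's exact definition), one natural starting point is Massey's guessing function: the expected number of sequential guesses a brute-force adversary needs to identify the sensitive value $S$, with and without observing the released data $Y$:

\[
G(S) \;=\; \sum_{i=1}^{|\mathcal{S}|} i \, p_{(i)},
\qquad
G(S \mid Y) \;=\; \sum_{y} P_Y(y)\, G(S \mid Y = y),
\]

where $p_{(1)} \ge p_{(2)} \ge \dots$ are the probabilities of $S$ sorted in decreasing order (the adversary guesses the most likely value first). A guessing-based leakage can then be taken as a difference of the form $G(S) - G(S \mid Y) \ge 0$: the number of guesses the observation $Y$ saves the adversary. The paper's Guessing Leakage may be normalized or defined differently, but it is built on this notion of guessing.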