The pervasive use of images in social networks, government databases, and industrial applications poses serious privacy risks and has raised public concern. Although differential privacy (DP) is a widely accepted criterion that provides a provable privacy guarantee, applying DP to unstructured data such as images is not trivial due to the lack of a clear quantification of the meaningful difference between any two images. In this paper, for the first time, we introduce a novel notion of image-aware differential privacy, referred to as DP-Image, that can protect users' personal information in images from both human and AI adversaries. The DP-Image definition is formulated as an extension of traditional differential privacy, based on distance measurements between the feature-space vectors of images. We then propose a mechanism to achieve DP-Image by adding noise to an image's feature vector. Finally, we conduct experiments with a case study on face-image privacy. Our results show that the proposed DP-Image method provides effective DP protection for images, with controllable distortion to faces.
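As a rough illustration of the noise-addition step described above, the sketch below applies a Laplace mechanism to a feature vector. This is a minimal sketch under assumptions: the `sensitivity` and `epsilon` values are placeholders, the feature vector stands in for the output of some image encoder, and the paper's actual encoder/decoder and noise calibration are not reproduced here.

```python
import numpy as np

def laplace_mechanism(feature_vec, sensitivity, epsilon, rng=None):
    """Add Laplace noise with scale (sensitivity / epsilon) to a feature vector.

    In a DP-Image-style pipeline, `sensitivity` would be derived from the
    bounds of the feature extractor; the value used below is illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return feature_vec + rng.laplace(loc=0.0, scale=scale, size=feature_vec.shape)

# Toy usage: a 128-dimensional "image feature vector" (e.g., from a face encoder).
features = np.random.default_rng(0).normal(size=128)
noisy = laplace_mechanism(features, sensitivity=1.0, epsilon=0.5)
# The noisy vector would then be decoded back into a perturbed image.
```

Smaller `epsilon` means a larger noise scale and stronger privacy, which is what makes the distortion to the reconstructed face controllable.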
With the development of the Internet of Multimedia Things (IoMT), an increasing amount of image data is collected by multimedia devices such as smartphones, cameras, and drones. These images are used widely across IoMT applications, which presents substantial challenges for privacy preservation. In this paper, we propose a new image privacy protection framework that protects the sensitive personal information contained in images collected by IoMT devices. We use deep neural networks to identify privacy-sensitive content in images and then replace it with synthetic content generated by generative adversarial networks (GANs) under differential privacy (DP). Our experimental results show that the proposed framework effectively protects users' privacy while maintaining image utility.
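The framework above can be pictured as a detect-then-replace pipeline. The sketch below shows that structure only; `detect_sensitive_regions` and `dp_gan_synthesize` are hypothetical stubs standing in for the trained detector and the DP-trained GAN generator, not the paper's implementation.

```python
import numpy as np

def detect_sensitive_regions(image):
    """Placeholder detector. In the described framework this would be a
    trained DNN (e.g., a face/object detector); here we return one dummy
    bounding box as (x0, y0, x1, y1)."""
    return [(16, 16, 48, 48)]

def dp_gan_synthesize(shape, epsilon, rng):
    """Placeholder for a generator trained with differential privacy.
    Random pixels stand in for the synthetic content it would produce."""
    return rng.uniform(0.0, 1.0, size=shape)

def protect(image, epsilon=1.0, seed=0):
    """Replace each detected sensitive region with synthetic content,
    leaving the rest of the image (its utility) intact."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    for (x0, y0, x1, y1) in detect_sensitive_regions(image):
        region_shape = (y1 - y0, x1 - x0, image.shape[2])
        out[y0:y1, x0:x1] = dp_gan_synthesize(region_shape, epsilon, rng)
    return out

# Toy usage on a random 64x64 RGB image.
image = np.random.default_rng(1).uniform(size=(64, 64, 3))
protected = protect(image)
```

Because only the detected regions are replaced, the non-sensitive context of the image is preserved, which is the utility/privacy trade-off the framework targets.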
Privacy protection has attracted increasing concern in recent years. People tend to believe that large social platforms will honor their agreements to protect user privacy. However, uploaded photos are usually not processed for privacy protection; for example, Facebook, the world's largest social platform, was found to have leaked the photos of millions of users to commercial organizations for big-data analytics. A common analytical tool used by these organizations is the deep neural network (DNN). Today's DNNs can accurately identify a person's appearance, body shape, and hobbies, as well as more sensitive personal information such as addresses, phone numbers, email addresses, and bank card numbers. To let people share photos without worrying about their privacy, we propose an algorithm that allows users to selectively protect private content while preserving the contextual information in their images. The results show that the proposed algorithm can select and perturb the private objects among multiple candidate objects so that a DNN can identify only the non-private objects in an image.
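One way to realize region-restricted perturbation of this kind is an FGSM-style adversarial step masked to the private objects. The abstract does not specify the perturbation method, so the PyTorch sketch below uses that approach as an assumption; `masked_fgsm`, the toy classifier, and the mask are all illustrative, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def masked_fgsm(model, image, label, mask, eps=0.03):
    """One FGSM-style step applied only inside the private-object mask,
    leaving non-private regions untouched for the DNN to recognize."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + eps * image.grad.sign() * mask
    return perturbed.clamp(0.0, 1.0).detach()

# Toy usage with a stand-in classifier and a mask covering one region.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])
mask = torch.zeros_like(image)
mask[..., 8:24, 8:24] = 1.0  # only this region is treated as private
protected = masked_fgsm(model, image, label, mask)
```

Confining the perturbation to the mask is what lets the DNN still recognize the unmasked, non-private objects while its predictions on the masked objects are disrupted.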