Image privacy issues have become an important challenge as millions of images are shared on social networking sites every day. Often due to a lack of privacy awareness and to social pressure, users' posted images reveal sensitive information and may be easily used to their detriment. To address these issues, several recent studies have proposed machine learning models to automatically identify whether an image contains private information. However, progress on this important task has been hampered by the absence of reliable, publicly available, up-to-date datasets. To this end, we introduce PrivacyAlert, a dataset of recent images extracted from Flickr and annotated with privacy labels (private or public). Our data collection process is based on a state-of-the-art privacy taxonomy and captures a comprehensive set of image types of varying sensitivity. We perform a thorough analysis of our dataset and report image privacy prediction results using classic and deep learning models to set the ground for future studies. Our dataset is publicly available at: https://doi.org/10.5281/zenodo.6406870.
Stance detection determines whether the author of a text is in favor of, against, or neutral toward a specific target, providing valuable insights into important events such as the legalization of abortion. Despite significant progress on this task, one of the remaining challenges is the scarcity of annotations. Moreover, most previous work focuses on hard-label training, in which meaningful similarities among categories are discarded. To address these challenges, we first evaluate multi-target and multi-dataset training settings by training one model on each dataset and on datasets of different domains, respectively. We show that models learn more universal representations with respect to targets in these settings. Second, we investigate knowledge distillation for stance detection and observe that transferring knowledge from a teacher model to a student model is beneficial in our proposed training settings. Moreover, we propose an Adaptive Knowledge Distillation (AKD) method that applies instance-specific temperature scaling to the teacher and student predictions. Results show that the multi-dataset model performs best on all datasets and can be further improved by the proposed AKD, outperforming the state of the art by a large margin. We publicly release our code.
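The abstract does not spell out how the instance-specific temperatures are obtained, so the following is only a minimal PyTorch sketch of distillation with per-instance temperature scaling; the heuristic mapping teacher confidence to a temperature, the loss weighting `alpha`, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def akd_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """Adaptive-distillation sketch: each instance gets its own temperature,
    applied to both teacher and student logits before the KL term."""
    # Illustrative assumption: less confident teacher predictions are
    # softened more (higher temperature).
    with torch.no_grad():
        teacher_conf = F.softmax(teacher_logits, dim=-1).max(dim=-1).values
        tau = 1.0 + (1.0 - teacher_conf)           # shape: (batch,)
    tau_col = tau.unsqueeze(-1)                    # broadcast over classes

    soft_teacher = F.softmax(teacher_logits / tau_col, dim=-1)
    log_soft_student = F.log_softmax(student_logits / tau_col, dim=-1)

    # Per-instance KL, rescaled by tau^2 as in standard distillation.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="none").sum(-1)
    kd = (kd * tau ** 2).mean()

    ce = F.cross_entropy(student_logits, labels)   # hard-label term
    return alpha * ce + (1.0 - alpha) * kd
```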
Text in the form of tags associated with online images is often informative for predicting private or sensitive content. With privacy prediction systems running on social networking sites that decide whether each uploaded image should be posted publicly or protected, users may be reluctant to share real images that could reveal their identity, but may still share image tags. In such cases, privacy-aware tags become good indicators of image privacy and can be used to generate privacy decisions. In this paper, we aim to learn tag representations for images to improve tag-based image privacy prediction. To achieve this, we explore self-distillation with BERT, in which knowledge in the form of soft probability distributions (soft labels) from the teacher model helps train the student model. Our approach effectively learns better tag representations, improves performance on private image identification, and outperforms state-of-the-art models for this task. Moreover, we use knowledge distillation to improve tag representations in a semi-supervised learning setting: with only 20% of the annotated data, our semi-supervised approach achieves performance similar to its fully supervised counterpart. Finally, we provide a comprehensive analysis to better understand our approach.
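As a rough illustration of the soft-label transfer step described above, here is a minimal sketch assuming a Hugging Face `transformers` setup; the `bert-base-uncased` checkpoint, the fixed temperature, and the practice of joining tags into one text input are all assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical setup: a fine-tuned teacher and a student sharing the same
# BERT backbone, classifying tag sets as private vs. public.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
teacher = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
student = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def distill_step(tags, temperature=2.0):
    """One self-distillation step: the student matches the teacher's
    soft probability distribution (soft labels) over {private, public}."""
    inputs = tok(" ".join(tags), return_tensors="pt", truncation=True)
    with torch.no_grad():                       # teacher provides targets only
        target = F.softmax(teacher(**inputs).logits / temperature, dim=-1)
    log_probs = F.log_softmax(student(**inputs).logits / temperature, dim=-1)
    return F.kl_div(log_probs, target, reduction="batchmean") * temperature ** 2
```

Because the teacher can produce soft labels for unlabeled tag sets as well, the same step extends naturally to the semi-supervised setting the abstract mentions.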
With the rapid development of mobile device technologies, people can post their daily lives on social networking sites such as Facebook, Flickr, and Instagram. This raises new privacy concerns, as users are often unaware that private information can be leaked and used to their detriment. Image privacy prediction models are developed to predict whether images contain sensitive information (private images) or are safe to share online (public images). Despite significant progress on this task, some crucial problems remain. First, both image content and tags have been found to be useful modalities for automatically predicting an image's privacy, yet most existing models use a single modality (image-only or tag-only), which limits their performance. Second, we observe that current image privacy prediction models are surprisingly vulnerable to even small perturbations in the input data: attackers can add small perturbations and easily damage a well-trained model. To address these challenges, in this paper, we propose a new decision-level Gated multi-modal fusion (GMMF) approach that fuses object, scene, and image tag modalities to predict the privacy of online images. In particular, the proposed approach determines, in a sample-by-sample manner, fusion weights for the class probability distributions generated by the single-modal classifiers according to their reliability for each target image, and performs a weighted decision-level fusion, so that highly reliable modalities receive higher fusion weights while less reliable ones are restrained with lower weights. Our experimental results show that the gated multi-modal fusion network effectively fuses single modalities and outperforms state-of-the-art models for image privacy prediction. Moreover, we perform adversarial training on our proposed GMMF model using multiple types of noise on the input data (i.e., images and/or tags). When some modalities fail under noise attacks, our approach effectively exploits the clean modalities and uses the fusion weights to minimize the negative influence of the degraded ones, achieving significantly stronger robustness than traditional fusion methods for image privacy prediction. The robustness of our GMMF model against data noise generalizes even to more severe noise levels. To the best of our knowledge, we are the first to investigate the robustness of image privacy prediction models against noise attacks. Finally, since the performance of decision-level multi-modal fusion depends heavily on the quality of the single-modal networks, we investigate self-distillation on the single-modal privacy classifiers and observe that transferring knowledge from a trained teacher model to a student model is beneficial in our proposed approach.
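To make the decision-level gated fusion idea concrete, here is a minimal PyTorch sketch assuming three single-modal classifiers (object, scene, tags) that already output class probability distributions; the gate architecture (a single linear layer over the concatenated distributions) and all names are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedDecisionFusion(nn.Module):
    """Decision-level fusion sketch: a gate network assigns a per-sample
    weight to each modality's class distribution, so that more reliable
    modalities contribute more to the fused prediction."""

    def __init__(self, num_modalities=3, num_classes=2):
        super().__init__()
        # The gate sees all modality distributions concatenated together.
        self.gate = nn.Linear(num_modalities * num_classes, num_modalities)

    def forward(self, probs):
        # probs: (batch, num_modalities, num_classes), one distribution
        # per single-modal classifier (e.g., object, scene, tags).
        batch = probs.size(0)
        weights = F.softmax(self.gate(probs.view(batch, -1)), dim=-1)
        # Weighted sum of the modality distributions: (batch, num_classes).
        fused = torch.einsum("bm,bmc->bc", weights, probs)
        return fused, weights

# Usage: fuse three modalities' outputs for a batch of four images.
fusion = GatedDecisionFusion(num_modalities=3, num_classes=2)
dummy_probs = torch.rand(4, 3, 2).softmax(dim=-1)  # stand-in classifier outputs
fused_probs, fusion_weights = fusion(dummy_probs)
```

Because the gate weights are convex (softmax) combinations, the fused output remains a valid probability distribution, and a degraded modality can be suppressed per sample simply by driving its weight toward zero.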