Proceedings of the 25th ACM Conference on Hypertext and Social Media 2014
DOI: 10.1145/2631775.2631803
Analyzing images' privacy for the modern web

Abstract: Images are now one of the most common forms of content shared on online user-contributed sites and social Web 2.0 applications. In this paper, we present an extensive study exploring the privacy and sharing needs of users' uploaded images. We develop learning models to estimate adequate privacy settings for newly uploaded images, based on carefully selected image-specific features. We focus on a set of visual-content features and on tags. We identify the smallest set of features that, by themselves or combined toget…

Cited by 30 publications (29 citation statements). References 46 publications.
“…Image tags provide relevant cues for privacy-aware image retrieval [Zerr et al. 2012b] and can become an essential tool for surfacing the hidden content of the deep Web without exposing sensitive details. Additionally, previous works showed that user tags performed better than or on par with visual features [Squicciarini et al. 2014; Tonge and Caragea 2015; Zerr et al. 2012b]. For example, in our previous work [Tonge and Caragea 2015], we showed that the combination of user tags and deep tags derived from AlexNet performs comparably to the AlexNet-based visual features.…”
Section: Best Performing Visual Features vs. Tag Features
confidence: 60%
“…Prior works on privacy prediction [Squicciarini et al. 2014, 2017b; Tonge and Caragea 2015, 2016; Zerr et al. 2012b] found that the tags associated with images are indicative of their sensitive content. Tags are also crucial for image-related applications such as indexing, sharing, searching, content detection, and social discovery [Bischoff et al. …]. Since not all images on social networking sites have user tags, or the set of user tags is very sparse [Sundaram et al. 2012], we use an automatic technique to annotate images with tags based on their visual content, as described in our previous work [Tonge and Caragea 2015, 2016].…”
Section: Image Tags (Bag-of-Tags Model)
confidence: 99%
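The bag-of-tags idea described above — representing an image only by the presence or absence of its tags and training a classifier to predict a privacy setting — can be sketched roughly as follows. This is a minimal illustration with invented toy tags and labels, not the cited authors' actual pipeline or data:

```python
# Minimal sketch of a bag-of-tags privacy classifier.
# The tag sets and private/public labels below are hypothetical toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each image is represented only by its (space-separated) tags.
tagged_images = [
    "family kids birthday home",      # private
    "beach sunset landscape nature",  # public
    "selfie friends party drinks",    # private
    "architecture city skyline",      # public
    "wedding bride family portrait",  # private
    "mountain hiking trail forest",   # public
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = private, 0 = public

# Bag-of-tags: binary presence/absence of each tag in the vocabulary.
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(tagged_images)

clf = LogisticRegression()
clf.fit(X, labels)

# Estimate a privacy setting for a newly uploaded image from its tags.
new_image_tags = ["kids family home"]
pred = clf.predict(vectorizer.transform(new_image_tags))
```

In the cited work, images without user tags would first be annotated automatically from visual content; here the tags are simply assumed to be given.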
“…Authors considered image tags and visual features such as color histograms, faces, edge-direction coherence, and SIFT for the privacy classification task. Squicciarini et al. [2014] found that SIFT and image tags work best for predicting the sensitivity of users' images. Given the recent success of CNNs, Tran et al. [2016] and Tonge and Caragea [2016, 2018] showed promising privacy predictions compared with visual features such as SIFT and GIST.…”
Section: Related Work
confidence: 99%
“…Motivated by the fact that online users' privacy is increasingly and routinely compromised by social and content-sharing applications [58], researchers have recently started to explore machine learning and deep learning models to automatically identify private or sensitive content in images [35, 45, 49–52, 57]. Starting from the premise that the objects and scene contexts present in images impact images' privacy, many of these studies used objects, scenes, and user tags, or their combination (i.e., feature-level or decision-level fusion), to infer adequate privacy classification for online images.…”
Section: Introduction
confidence: 99%
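The feature-level versus decision-level fusion mentioned above can be illustrated with a short sketch. Feature-level (early) fusion concatenates the modalities into one vector before training a single classifier; decision-level (late) fusion trains one classifier per modality and combines their predicted probabilities. The feature matrices and labels below are synthetic placeholders, not features from the cited systems:

```python
# Sketch of feature-level vs. decision-level fusion for privacy
# prediction. All data here is randomly generated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 40
tag_feats = rng.random((n, 5))     # stand-in for bag-of-tags features
visual_feats = rng.random((n, 8))  # stand-in for visual (e.g., CNN) features
labels = rng.integers(0, 2, n)     # 1 = private, 0 = public

# Feature-level fusion: concatenate modalities, train one classifier.
fused = np.hstack([tag_feats, visual_feats])
early_clf = LogisticRegression().fit(fused, labels)
early_pred = early_clf.predict(fused)

# Decision-level fusion: one classifier per modality, then average
# their predicted class probabilities and take the argmax.
clf_tags = LogisticRegression().fit(tag_feats, labels)
clf_vis = LogisticRegression().fit(visual_feats, labels)
avg_proba = (clf_tags.predict_proba(tag_feats)
             + clf_vis.predict_proba(visual_feats)) / 2
late_pred = avg_proba.argmax(axis=1)
```

Averaging probabilities is only one of several late-fusion rules; weighted averages or meta-classifiers over the per-modality scores are common alternatives.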