Obfuscation-based private web search (OB-PWS) solutions allow users to search for information on the Internet while concealing their interests. The basic privacy mechanism in OB-PWS is the automatic generation of dummy queries that are sent to the search engine along with users' real requests. These dummy queries prevent the accurate inference of search profiles and provide query deniability. In this paper we propose an abstract model and an associated analysis framework to systematically evaluate the privacy protection offered by OB-PWS systems. We analyze six existing OB-PWS solutions using our framework and uncover vulnerabilities in their designs. Based on these results, we elicit a set of features that must be taken into account when analyzing the security of OB-PWS designs to avoid falling into the same pitfalls as previous proposals.
Online social networks (OSNs) have become one of the main communication channels in today's information society, and their emergence has raised new privacy concerns. The content uploaded to OSNs (such as pictures, status updates, and comments) is by default available to the OSN provider, and often to other people to whom the user who uploaded the content did not intend to give access. A different class of concerns relates to sensitive information that can be inferred from the behavior of users. For example, the analysis of user interactions augments social network graphs with potentially privacy-sensitive details on the nature of social relations, such as the strength of user relationships. A solution to prevent such inferences is to automatically generate dummy interactions that obfuscate the real interactions between OSN users. Given an adversary that observes the obfuscated interactions, the goal is to prevent the adversary from recovering parameters of interest (e.g., relationship strength) that accurately describe the real user interactions. The design and evaluation of obfuscation strategies requires metrics that express the level of protection they would offer when deployed in a particular OSN with its underlying user interaction patterns. In this paper we propose mutual information as an obfuscation metric. It measures the amount of information that the (observable) obfuscated interactions in the system leak about the (concealed) real interactions between users. We show that the metric is suitable for comparing different obfuscation strategies, and flexible enough to accommodate different network topologies and user communication patterns. Obfuscation comes at the cost of network overhead, and the proposed metric contributes to enabling the optimization of strategies that achieve good levels of privacy protection at minimum overhead. We provide a detailed methodology to compute the metric and perform experiments that illustrate its suitability.
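The metric proposed in this abstract can be illustrated with a small sketch (not the paper's exact methodology): given a joint distribution over concealed real interactions R and observable obfuscated interactions O, the leakage is the mutual information I(R; O). The toy distribution below is purely illustrative; a perfect obfuscation strategy would make O independent of R and drive the leakage to zero.

```python
import math

def mutual_information(joint):
    """I(R; O) in bits, given joint[(r, o)] -> probability."""
    # Marginal distributions of R and O.
    pr, po = {}, {}
    for (r, o), p in joint.items():
        pr[r] = pr.get(r, 0.0) + p
        po[o] = po.get(o, 0.0) + p
    # Sum p(r,o) * log2( p(r,o) / (p(r) p(o)) ) over the support.
    mi = 0.0
    for (r, o), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pr[r] * po[o]))
    return mi

# Toy scenario: R = 1 iff a real message was sent between two users,
# O = 1 iff the adversary observes traffic (real or dummy).
# Dummies are injected half the time when there is no real message.
joint = {
    (0, 0): 0.25,  # no real message, no dummy sent
    (0, 1): 0.25,  # no real message, dummy sent
    (1, 1): 0.50,  # real message always produces observable traffic
}
leak = mutual_information(joint)  # residual leakage in bits
```

With this strategy the adversary still learns roughly 0.31 bits per observation about R; a strategy under which O is uniform and independent of R would yield exactly 0 bits, at the cost of more dummy traffic.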
Cryptographic access control tools for online social networks (CACTOS) allow users to enforce their privacy settings online without relying on the social network provider or any other third party. Many such tools have been proposed in the literature, some of them implemented and currently publicly available, and yet they have seen poor or no adoption at all. In this paper we investigate which obstacles may be hindering the adoption of these tools. To this end, we perform a user study that inquires users about key issues related to the desirability and general perception of CACTOS. Our results suggest that, even if social network users would be potentially interested in these tools, several issues would effectively obstruct their adoption. Participants in our study perceived that CACTOS are a disproportionate means to protect their privacy online. This perception may have been motivated by the explicit use of cryptography, or by the fact that users do not actually share on social networks the type of information they would feel the need to encrypt. Moreover, in this paper we point out several key elements to be considered for the improvement and better usability of CACTOS.
The separation between the public and private spheres on online social networks is known to be at best blurred. On the one hand, previous studies have shown how it is possible to infer private attributes from publicly available data. On the other hand, no distinction exists between public and private data when we consider the ability of the OSN provider to access them. Even when OSN users go to great lengths to protect their privacy, such as by using encryption or communication obfuscation, correlations between data may render these solutions useless. In this paper, we study the relationship between private communication patterns and publicly available OSN data. This relationship informs both privacy-invasive inferences and OSN communication modelling, the latter being key to developing effective obfuscation tools. We propose an inference model based on Bayesian analysis and evaluate, using a real social network dataset, how archetypal social graph features can lead to inferences about private communication. Our results indicate that both friendship graph and public traffic data may not be informative enough to enable these inferences, with time analysis having a non-negligible impact on their precision.
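The Bayesian inference described in this abstract can be sketched in minimal form (the likelihood values below are illustrative assumptions, not the paper's fitted model): the adversary holds a prior belief that a pair of users communicates privately (hypothesis H) and updates it upon observing a public feature E, such as the pair being friends in the public graph.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' rule, for binary hypothesis H and evidence E."""
    num = p_e_given_h * prior_h
    den = num + p_e_given_not_h * (1.0 - prior_h)
    return num / den

# Illustrative numbers: 10% of pairs communicate privately; privately
# communicating pairs are friends in the public graph 90% of the time,
# versus 20% for the rest. Observing a public friendship then triples
# the adversary's belief, from 0.10 to about 0.33.
belief = posterior(prior_h=0.10, p_e_given_h=0.90, p_e_given_not_h=0.20)
```

The gap between prior and posterior quantifies how informative a single public feature is; the abstract's finding is that, on real data, such features alone leave this gap small unless temporal analysis is added.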