Abstract: Permission systems are the main defense that mobile platforms, such as Android and iOS, offer to users to protect their private data from prying apps. However, due to the tension between usability and control, such systems have several limitations that often force users to overshare sensitive data. We address some of these limitations with SmarPer, an advanced permission mechanism for Android. To address the rigidity of current permission systems and their poor matching of users' privacy preferences, SmarPer relies on contextual information and machine-learning methods to predict permission decisions at runtime. Note that the goal of SmarPer is to mimic the users' decisions, not to make privacy-preserving decisions per se. Using our SmarPer implementation, we collected 8,521 runtime permission decisions from 41 participants in real conditions. With this unique dataset, we show that an efficient Bayesian linear regression model achieves a mean correct classification rate of 80% (±3%). This represents a mean relative reduction of approximately 50% in the number of incorrect decisions compared with a user-defined static permission policy, i.e., the model used in current permission systems. SmarPer also addresses the suboptimal trade-off between privacy and utility: instead of offering only "allow" or "deny" decisions, SmarPer provides an "obfuscate" option with which users can still obtain utility by revealing only partial information to apps. We implemented obfuscation techniques in SmarPer for different data types and evaluated them during our data collection campaign. Our results show that 73% of the participants found obfuscation useful, and it accounted for almost a third of the total number of decisions. In short, we are the first to show, using a large dataset of real in situ permission decisions, that it is possible to learn users' unique decision patterns at runtime using contextual information while supporting data obfuscation; this is an important step towards automating the management of permissions in smartphones.
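To make the prediction step concrete, below is a minimal sketch in Python (not the authors' implementation) of how runtime permission decisions could be predicted from contextual features with a Bayesian linear regression, here approximated with scikit-learn's BayesianRidge in a one-vs-rest setup over the three decision types (allow, obfuscate, deny). The feature names, encodings, and example records are illustrative assumptions.

# Minimal sketch: predicting runtime permission decisions from context
# with Bayesian linear regression. All features and records are hypothetical.
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import BayesianRidge

DECISIONS = ["allow", "obfuscate", "deny"]

# Hypothetical contextual records: requesting app, permission type,
# whether the app is in the foreground, hour of day, and a coarse place label.
contexts = [
    {"app": "maps",   "perm": "LOCATION", "foreground": 1, "hour": 9,  "place": "work"},
    {"app": "game",   "perm": "CONTACTS", "foreground": 0, "hour": 22, "place": "home"},
    {"app": "social", "perm": "LOCATION", "foreground": 1, "hour": 19, "place": "home"},
    {"app": "game",   "perm": "LOCATION", "foreground": 0, "hour": 23, "place": "home"},
]
decisions = ["allow", "deny", "obfuscate", "deny"]

vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform(contexts)

# One Bayesian linear regressor per decision class (one-vs-rest).
models = {}
for d in DECISIONS:
    y = np.array([1.0 if label == d else 0.0 for label in decisions])
    models[d] = BayesianRidge().fit(X, y)

def predict_decision(context):
    """Return the decision whose regressor scores highest for this context."""
    x = vectorizer.transform([context])
    scores = {d: model.predict(x)[0] for d, model in models.items()}
    return max(scores, key=scores.get)

print(predict_decision({"app": "game", "perm": "LOCATION",
                        "foreground": 0, "hour": 21, "place": "home"}))

Since the abstract emphasizes learning each user's unique decision patterns, one such model would be trained per user on that user's own past decisions.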
Location check-ins contain both geographical and semantic information about the visited venues. Semantic information is usually represented by means of tags (e.g., "restaurant"). Such data can reveal more personal information about users than they expect to disclose, thus threatening their privacy. To mitigate such threats, several privacy protection techniques based on location generalization have been proposed. Although the privacy implications of such techniques have been extensively studied, the utility implications are mostly unknown. In this paper, we propose a predictive model for quantifying the effect of a privacy-preserving technique (i.e., generalization) on the perceived utility of check-ins. We first study the users' motivations behind their location check-ins, based on a study targeted at Foursquare users (N = 77). We propose a machine-learning method for determining the motivation behind each check-in, and we design a motivation-based predictive model for the utility implications of generalization. Based on the survey data, our results show that the model correctly predicts the fine-grained motivation behind a check-in in 43% of the cases, and the coarse-grained motivation in 63% of the cases. It also predicts, with a mean error of 0.52 (on a scale from 1 to 5), the loss of utility caused by semantic and geographical generalization. This model makes it possible to design utility-aware, privacy-enhancing mechanisms in location-based online social networks. It also enables service providers to implement location-sharing mechanisms that preserve both utility and privacy for their users.
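As a rough illustration of such a motivation-based pipeline, the sketch below (Python; all features, labels, generalization levels, and ratings are hypothetical stand-ins, not the survey data) first classifies the coarse-grained motivation behind a check-in and then regresses the utility loss, on a 1-to-5 scale, from the predicted motivation and the applied semantic and geographical generalization levels.

# Minimal sketch: motivation classification followed by a motivation-based
# utility-loss regression under generalization. All data are hypothetical.
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.pipeline import make_pipeline

# Step 1: predict the coarse-grained motivation behind a check-in from its
# semantic tag and simple context.
checkins = [
    {"tag": "restaurant", "with_friends": 1, "hour": 20},
    {"tag": "gym",        "with_friends": 0, "hour": 7},
    {"tag": "airport",    "with_friends": 0, "hour": 14},
    {"tag": "bar",        "with_friends": 1, "hour": 23},
]
motivations = ["social", "self_tracking", "informational", "social"]

motivation_clf = make_pipeline(DictVectorizer(sparse=False),
                               LogisticRegression(max_iter=1000))
motivation_clf.fit(checkins, motivations)

# Step 2: predict utility loss (1-5 scale) from the predicted motivation and
# the semantic/geographical generalization levels applied to the check-in.
examples = [
    {"motivation": "social",        "semantic_level": 0, "geo_level": 1},
    {"motivation": "social",        "semantic_level": 2, "geo_level": 2},
    {"motivation": "self_tracking", "semantic_level": 1, "geo_level": 0},
    {"motivation": "informational", "semantic_level": 2, "geo_level": 2},
]
utility_loss = [1.0, 4.0, 2.0, 3.5]  # made-up ratings, for illustration only

loss_model = make_pipeline(DictVectorizer(sparse=False), Ridge())
loss_model.fit(examples, utility_loss)

def predicted_loss(checkin, semantic_level, geo_level):
    """Estimate how much utility a generalized version of this check-in loses."""
    motivation = motivation_clf.predict([checkin])[0]
    x = {"motivation": motivation,
         "semantic_level": semantic_level, "geo_level": geo_level}
    return float(np.clip(loss_model.predict([x])[0], 1.0, 5.0))

print(predicted_loss({"tag": "restaurant", "with_friends": 1, "hour": 21}, 2, 1))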