Privacy policy analysis relies on understanding the meaning of sentences in order to identify those of interest to privacy-related applications. In this paper, the authors investigate the strengths and limitations of sentence embeddings for detecting dangerous permissions in the privacy policies of Android apps. A Sent2Vec sentence embedding model was trained on 130,000 Android app privacy policies. The terminology extracted by the model was then compared against a gold standard on a dataset of 564 privacy policies. This work aims to inform researchers and developers interested in extracting privacy-related information from privacy policies using sentence embedding models. It may also help regulators interested in deploying sentence embedding models to check privacy policies' compliance with government regulations and to identify inconsistencies or violations.
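The core idea can be illustrated with a minimal sketch: embed each policy sentence as a vector and rank sentences by cosine similarity to a seed sentence describing a dangerous permission. This toy stands in for the paper's trained Sent2Vec model with deterministic random word vectors; all sentences and names below are hypothetical, not the paper's data.

```python
import numpy as np

# Toy averaged-word-vector embedding (a stand-in for trained Sent2Vec
# vectors): each word gets a fixed random 50-dim vector, and a sentence
# embedding is the mean of its word vectors.
rng = np.random.default_rng(0)
_vocab = {}

def word_vec(word, dim=50):
    """Deterministic pseudo-embedding per word within a run."""
    if word not in _vocab:
        _vocab[word] = rng.standard_normal(dim)
    return _vocab[word]

def embed(sentence, dim=50):
    """Average the word vectors of a lower-cased, whitespace-split sentence."""
    return np.mean([word_vec(w, dim) for w in sentence.lower().split()], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Seed sentence describing a dangerous-permission data practice,
# plus hypothetical privacy-policy sentences to rank against it.
seed = "we collect your precise location data"
policy = [
    "we may access your device location to show nearby offers",
    "we collect your precise location data for analytics",
    "our newsletter is sent every month",
]
ranked = sorted(policy, key=lambda s: cosine(embed(seed), embed(s)), reverse=True)
# The near-duplicate of the seed ranks first; the unrelated sentence last.
```

In the paper's setting the toy `embed` would be replaced by the Sent2Vec model trained on the 130,000-policy corpus, but the retrieval-by-similarity pattern is the same.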
Google requires Android apps that handle users' personal data, such as photos and contact information, to post a privacy policy that comprehensively describes how the app collects, uses, and shares users' information. Unfortunately, although knowing why an app wants to access specific user information is very useful, the Android permissions screen does not provide this information. Accordingly, users have reported concerns about apps requesting permissions that seem unrelated to the apps' functions. To advance toward practical solutions that help users protect their privacy, a technique that automatically discovers the rationales for the dangerous permissions an Android app requests, by extracting them from the app's privacy policy, would be a great advantage. Before this is possible, however, it is important to bridge the gap between the technical terms used in Android permissions and the natural language terminology of privacy policies. In this paper, we record the terminology used in Android apps' privacy policies to describe the use of dangerous permissions. Our semiautomated approach employs natural language processing (NLP) and information extraction (IE) techniques to map privacy policy terminology to Android dangerous permissions, linking 128 information types to dangerous permissions. The resulting mapping yields semantic information that can be used to extract the rationales for dangerous permissions from apps' privacy policies.
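The kind of mapping the abstract describes can be sketched as a lookup from natural-language information types to the Android dangerous permissions they imply. The entries below are a small illustrative excerpt using real Android permission names, not the paper's actual 128-type mapping, and the phrase-matching lookup is a deliberately naive stand-in for the paper's NLP/IE pipeline.

```python
# Hypothetical excerpt: policy-side information types mapped to the
# Android dangerous permissions they correspond to. Permission names
# are real Android constants; the phrase choices are illustrative.
INFO_TYPE_TO_PERMISSION = {
    "precise location": "ACCESS_FINE_LOCATION",
    "approximate location": "ACCESS_COARSE_LOCATION",
    "contact list": "READ_CONTACTS",
    "photos": "READ_EXTERNAL_STORAGE",
    "voice recordings": "RECORD_AUDIO",
    "text messages": "READ_SMS",
}

def permissions_mentioned(policy_sentence):
    """Return the dangerous permissions implied by a policy sentence,
    using simple substring matching over the mapping above."""
    text = policy_sentence.lower()
    return sorted({perm for phrase, perm in INFO_TYPE_TO_PERMISSION.items()
                   if phrase in text})
```

With such a mapping in place, sentences like "We may access your contact list and photos" can be linked back to `READ_CONTACTS` and `READ_EXTERNAL_STORAGE`, which is the semantic bridge the paper builds between policy text and permission requests.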
With the rapid growth of mobile applications and the increasing number of mobile users, many cloud storage services have begun offering stand-alone mobile applications for accessing files remotely and sharing them. In this paper, an in-depth analysis is performed on the privacy policies of zero-knowledge mobile cloud storage applications. The analysis covers the types of information collected, collection mechanisms, purposes of collection, sharing of information, user controls, and information retention periods. The results show that most privacy policies addressed the important areas of information collection, purposes of collection, information sharing, and user controls. On the other hand, many policies lacked detailed information about the data retention period.
Today, Android users face numerous permission screens asking for access to their personal information when using Android apps. Users must balance several considerations when choosing to grant or deny these data collection requests. It is therefore important to understand how users make these decisions and what factors play a role in them. A number of studies of Android permission screens have reported user discomfort with, and misunderstanding of, the permission system. However, most studies were carried out on the old permission system, in which all permissions were presented at installation time and the user had to either accept them all or stop the installation. Under the new permission system introduced with Android version 6.0, permissions are instead presented at run time. In this work, we study users' disclosure decisions under the new run-time system on Android. We model users' disclosure decisions from three perspectives: dangerous permission type, clarity of rationale, and clarity of context. The study was conducted on Amazon Mechanical Turk. The results show that dangerous permission type and clarity of context have a statistically significant effect on users' disclosure decisions, whereas clarity of a dangerous permission's rationale does not contribute significantly to users' decisions. These findings shed light on important factors that users weigh when making privacy decisions under the new Android run-time model; Android app developers should take these factors into account when requesting access to users' private information.