Internet of Things (IoT) devices create new ways through which personal data is collected and processed by service providers. Frequently, end users have little awareness of, and even less control over, these devices' data collection. IoT Personalized Privacy Assistants (PPAs) can help overcome this issue by helping users discover and, when available, control the data collection practices of nearby IoT resources. We use semi-structured interviews with 17 participants to explore user perceptions of three increasingly autonomous potential implementations of PPAs, identifying benefits and issues associated with each implementation. We find that participants weigh the desire for control against the fear of cognitive overload. We recommend solutions that address users' differing automation preferences and reduce notification overload. We discuss open issues related to opting out from public data collections, automated consent, the phenomenon of user resignation, and designing PPAs with at-risk communities in mind.
We conducted an in-lab user study with 24 participants to explore the usefulness and usability of privacy choices offered by websites. Participants were asked to find and use choices related to email marketing, targeted advertising, or data deletion on a set of nine websites that differed in terms of where and how these choices were presented. They struggled with several aspects of the interaction, such as selecting the correct page from a site's navigation menu and understanding what information to include in written opt-out requests. Participants found mechanisms located in account settings pages easier to use than options contained in privacy policies, but many still consulted help pages or sent email to request assistance. Our findings indicate that, despite their prevalence, privacy choices like those examined in this study are difficult for consumers to exercise in practice. We provide design and policy recommendations for making these website opt-out and deletion choices more useful and usable for consumers.
Text passwords, a frequent vector for account compromise yet still ubiquitous, have been studied for decades by researchers attempting to determine how to coerce users to create passwords that are hard for attackers to guess but still easy for users to type and memorize. Most studies examine one password or a small number of passwords per user, and studies often rely on passwords created solely for the purpose of the study or on passwords protecting low-value accounts. These limitations severely constrain our understanding of password security in practice, including the extent and nature of password reuse, password behaviors specific to categories of accounts (e.g., financial websites), and the effect of password managers and other privacy tools. In this paper we report on an in situ study of 154 participants over an average of 147 days each. Participants' computers were instrumented, with careful attention to privacy, to record detailed information about password characteristics and usage, as well as many other computing behaviors such as use of security and privacy web browser extensions. This data allows a more accurate analysis of password characteristics and behaviors across the full range of participants' web-based accounts. Examples of our findings are that the use of symbols and digits in passwords predicts increased likelihood of reuse, while increased password strength predicts decreased likelihood of reuse; that password reuse is more prevalent than previously believed, especially when partial reuse is taken into account; and that password managers may have no impact on password reuse or strength. We also observe that users can be grouped into a handful of behavioral clusters, representative of various password management strategies. Our findings suggest that once a user needs to manage a larger number of passwords, they cope by partially and exactly reusing passwords across most of their accounts.
We conducted an online survey and remote usability study to explore user needs related to advertising controls on Facebook and determine how well existing controls align with these needs. Our survey results highlight a range of user objectives related to controlling Facebook ads, including being able to select what ad topics are shown or what personal information is used in ad targeting. Some objectives are achievable with Facebook's existing controls, but participants seemed to be unaware of them, suggesting issues of discoverability. In our remote usability study, participants noted areas in which the usability of Facebook's advertising controls could be improved, including the location, layout, and explanation of controls. Additionally, we found that users could be categorized into four groups based on their privacy concerns related to Facebook's data collection practices, objectives for controlling their ad experience, and willingness to engage with advertising controls. Our findings provide a set of user requirements for advertising controls, applicable to Facebook as well as other platforms, that would better align such controls with users' needs and expectations.
In this paper we describe the iterative evaluation and refinement of a consent flow for a chatbot being developed by a large U.S. health insurance company. This chatbot’s use of a cloud service provider triggers a requirement for users to agree to a HIPAA authorization. We highlight remote usability study and online survey findings indicating that simplifying the interface and language of the consent flow can improve the user experience and help users who read the content understand how their data may be used. However, we observe that most users in our studies, even those using our improved consent flows, missed important information in the authorization until we asked them to review it again. We also show that many people are overconfident about the privacy and security of healthcare data and that many people believe HIPAA applies in far more contexts than it actually does. Given that our redesigns following best practices did not produce many meaningful improvements in informed consent, we argue for the need for research on alternate approaches to health data disclosures such as standardized disclosures; methods borrowed from clinical research contexts such as multimedia formats, quizzes, and conversational approaches; and automated privacy assistants.