“…Moreover, people do not read extensive privacy policies. If people took the time to read through privacy policies, it is estimated that they would have to spend 244 hr per year (Custers et al., 2018:253). Furthermore, the GDPR has shortcomings as it sets out to adequately protect personal data by defining personal data solely with reference to a data subject.…”
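As a rough illustration of where a figure of that order of magnitude comes from, the following back-of-envelope sketch uses inputs commonly attributed to McDonald and Cranor's methodology (roughly 1,462 unique websites visited per year and about 10 minutes to read each policy); these specific numbers are assumptions for illustration, not figures from the quoted source.

```python
# Back-of-envelope reproduction of the ~244 hours/year estimate.
# Both inputs below are assumed values, not taken from the quote above.
sites_per_year = 1462       # assumed unique websites visited annually
minutes_per_policy = 10     # assumed reading time per privacy policy

hours_per_year = sites_per_year * minutes_per_policy / 60
print(round(hours_per_year))  # ≈ 244
```

Under these assumptions the arithmetic lands almost exactly on the 244-hour figure cited above, which is the point of the estimate: even a modest per-policy reading time accumulates to an implausible annual burden.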
This article uses cases from disaster management as a springboard for presenting a critique of a right to group privacy in a strong sense. As such, the article challenges the idea of strong group privacy, which holds that there are situations in which the group, and not its members, is the holder of a right to privacy. The paper argues for a moderate interpretation of group privacy, stressing that group privacy is a matter of privacy for the members constituting the group. Although data‐driven knowledge discovery implies profiling by group categorization, this observation does not constitute a reason to introduce a right to group privacy for purposes other than protecting the individual's right to privacy. The article presents preliminary theoretical considerations, which may inform the creation of a framework that protects personal privacy by considering a moderate sense of group privacy suited to tackling the privacy challenges implied by data analytics.
“…32), which indicate the DS’s wishes by which they, by a statement or by clear affirmative action, signify agreement to the processing of their PI, i.e., there must be no uncertainty about the intent of the DS (Custers et al., 2018).…”
Section: Problem Statement and Research Questions
“…But whether DSs are always capable of making these choices and willing to do so in practice is questionable (Schermer et al., 2014). That is why most privacy laws and regulations (especially the GDPR) have brought to light the concept of “informed consent” (Kurteva et al., 2020), stating that consent cannot be valid if it is not informed (Custers et al., 2018), i.e., the DS who is asked for consent should be properly informed of what exactly she is consenting to and (made) aware to some extent of the consequences that such consent may have.…”
Section: Informational Self-determination Through Notice and Consent: …
“…The challenge of providing usable privacy notice has been recognized as an open problem, and suggestions to improve the informed consent process are scattered across the literature. For instance, one of the most frequently suggested improvements is reducing information overload in the notice (McDonald and Cranor, 2008; Custers et al., 2018). However, several studies showed that such a solution did not improve comprehension much.…”
Section: Problem Statement and Research Questions
“…In principle, this can benefit both sides because individuals can enjoy access to a variety of online services, news sites, e-mail, social networking, videos, music, etc., without explicitly paying with money (Karwatzki et al., 2018). However, many individuals may not know that they are paying with their personal information (PI) as their behavior is being tracked and their PI is being collected and sold (Custers et al., 2018), simply because they blindly accepted the privacy policies or terms of service offered by these websites. This leads us to the biggest lie on the internet: “I have read and agree to the terms and conditions” (Obar and Oeldorf-Hirsch, 2020).…”
Purpose
Most developed countries have enacted privacy laws to govern the collection and use of personal information (PI) as a response to the increased misuse of PI. Yet, these laws rely heavily on the concept of informational self-determination through the “notice” and “consent” models, which is deeply flawed. This study aims to tackle these flaws and achieve the full potential of these privacy laws.
Design/methodology/approach
The author critically reviews the concept of informational self-determination through the “notice” and “consent” model, identifying its main flaws and how they can be tackled.
Findings
Existing approaches present interesting ideas and useful techniques that focus on tackling specific problems of informational self-determination but fall short of proposing a comprehensive solution that tackles the essence of the overall problem.
Originality/value
This study introduces a model for informed consent, a proposed architecture that aims at empowering individuals (data subjects) to take an active role in the protection of their PI by simplifying the informed consent transaction without reducing its effectiveness, and an ontology that can partially realize the proposed architecture.
Terms of use of a digital service are often framed in a binary way: Either one agrees to the service provider's data processing practices, and is granted access to the service, or one does not, and is denied the service. Many scholars have lamented these 'take-it-or-leave-it' situations, as they go against the ideals of data protection law. To address this inadequacy, computer scientists and legal scholars have tried to come up with approaches to enable more privacy-friendly products and services. In this article, we call for a right to customize the processing of user data. Our arguments build upon technology-driven approaches as well as on the ideals of privacy by design and the now codified data protection by design and default norm within the General Data Protection Regulation. In addition, we draw upon the right to repair that has been advanced to empower consumers and enable a more circular economy. We propose two technologically oriented approaches, termed 'variants' and 'alternatives', that could enable the technical implementation of a right to customization. We posit that these approaches cannot be demanded without limitation, and that restrictions will depend on how reasonable a customization demand is.