Privacy by Design has emerged as a proactive approach for embedding privacy into the early stages of the design of information and communication technologies, but it is no 'silver bullet'. Engineering Privacy by Design is hampered by a lack of holistic and systematic methodologies that address the complexity and variability of privacy issues and support the translation of its principles into engineering activities. As a consequence, those principles remain stated at a high level of abstraction, without accompanying tools and guidelines. We analyse three privacy requirements engineering methods and derive from them a set of criteria that aid both in identifying data-processing activities that may lead to privacy violations and harms and in specifying appropriate design decisions. We also present principles for engineering Privacy by Design that build upon these criteria. Based on these, we outline some preliminary thoughts on the form of a principled framework that addresses the plurality and contextuality of privacy issues and supports the translation of the principles of Privacy by Design into engineering activities.
It is well understood that processing personal data without effective data management models may lead to privacy violations. Such concerns have motivated the development of privacy-aware practices and systems, as well as legal frameworks and standards. However, there is a disconnect between policy-makers and software engineers with respect to the meaning of privacy. In addition, it is challenging to establish that the systems underlying business processes comply with their privacy requirements, to provide technical assurances, and to meet data subjects' expectations. We propose an abstract personal data lifecycle (APDL) model to support the management and traceability of personal data. The APDL model represents data-processing activities in a way that is amenable to analysis. As well as facilitating the identification of potentially harmful data-processing activities, it has the potential to demonstrate compliance with legal frameworks and standards.
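To make the idea of a lifecycle model that supports analysis concrete, the sketch below represents lifecycle operations and the activities performed under them, and flags activities that lack a stated purpose or a consent record. This is a minimal illustration only: the operation names, activity attributes, and flagging rule are assumptions for the example, not the APDL model itself.

```python
from enum import Enum

# Illustrative lifecycle operations; the actual APDL model's
# operations and activities are defined in the paper, not here.
class Operation(Enum):
    COLLECTION = "collection"
    RETENTION = "retention"
    USE = "use"
    DISCLOSURE = "disclosure"
    DESTRUCTION = "destruction"

class Activity:
    """A data-processing activity performed under a lifecycle operation."""
    def __init__(self, name, operation, purpose, consent_obtained):
        self.name = name
        self.operation = operation
        self.purpose = purpose            # stated purpose, or None
        self.consent_obtained = consent_obtained

def flag_potentially_harmful(activities):
    """Flag activities with no stated purpose or no consent record."""
    return [a.name for a in activities
            if a.purpose is None or not a.consent_obtained]

activities = [
    Activity("collect-email", Operation.COLLECTION, "account creation", True),
    Activity("share-with-partner", Operation.DISCLOSURE, None, False),
]
print(flag_potentially_harmful(activities))  # ['share-with-partner']
```

Representing activities as explicit, attributed records is what makes the lifecycle amenable to this kind of automated screening and to traceability back to the operation each activity belongs to.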
Concerns over data-processing activities that may lead to privacy violations or harms have motivated the development of legal frameworks and standards to govern the processing of personal data. However, it is widely recognised that there is a disconnect between policy-makers' intentions and software engineering reality. The Abstract Personal Data Lifecycle (APDL) model distinguishes between the main operations that can be performed on personal data during its lifecycle and outlines the distinct activities associated with each operation. We show how the APDL model can be represented as a profile of the Unified Modeling Language (UML), and we illustrate the profile via a realistic case study.
It is increasingly recognised that Privacy Impact Assessments (PIAs) play a crucial role in providing privacy protection for data subjects and in supporting risk management for organisations. However, existing PIA processes are typically not accompanied by guidelines or methodologies that sufficiently support privacy risk assessments and illustrate precisely how the core part of the PIA, the risk assessment itself, can be conducted. We present an approach for assessing potential privacy risks built upon a privacy risk model that considers legal, organisational, societal and technical aspects. This approach has the potential to underpin a systematic and traceable privacy risk-assessment methodology that can complement PIA processes.
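A multi-aspect risk model of this kind might be sketched as follows. The four aspect names come from the abstract, but the scoring scheme (likelihood times impact on a 1-5 scale, reporting the highest-scoring aspect) is an assumption for illustration, not the paper's actual risk model.

```python
# The aspects are named in the abstract; the 1-5 likelihood/impact
# scales and the scoring rule below are illustrative assumptions.
ASPECTS = ("legal", "organisational", "societal", "technical")

def risk_score(likelihood, impact):
    """Combine likelihood and impact (each rated 1-5) into one score."""
    return likelihood * impact

def assess(threat):
    """Score a threat per aspect and identify the highest-risk aspect."""
    scores = {aspect: risk_score(*threat[aspect]) for aspect in ASPECTS}
    worst = max(scores, key=scores.get)
    return scores, worst

# A hypothetical threat, rated (likelihood, impact) per aspect.
threat = {
    "legal": (4, 5),          # e.g. unlawful disclosure
    "organisational": (2, 3),
    "societal": (3, 4),
    "technical": (3, 2),
}
scores, worst = assess(threat)
print(worst, scores[worst])  # legal 20
```

Keeping the per-aspect scores alongside the headline result is what makes such an assessment traceable: each conclusion can be followed back to the aspect ratings that produced it.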
Purpose: Concerns over data-processing activities that may lead to privacy violations or harms have motivated the development of legal frameworks and standards. Further, software engineers are increasingly expected to develop and maintain privacy-aware systems that both comply with such frameworks and standards and meet reasonable expectations of privacy. This paper aims to facilitate reasoning about privacy compliance, from legal frameworks and standards, with a view to providing necessary technical assurances.

Design/methodology/approach: The authors show how the standard extension mechanisms of the UML meta-model might be used to specify and represent data-processing activities in a way that is amenable to privacy compliance checking and assurance.

Findings: The authors demonstrate the usefulness and applicability of the extension mechanisms in specifying key aspects of privacy principles as assumptions and requirements, as well as in providing criteria for the evaluation of these aspects to assess whether the model meets these requirements.

Originality/value: First, the authors show how key aspects of abstract privacy principles can be modelled using stereotypes and tagged values as privacy assumptions and requirements. Second, the authors show how compliance with these principles can be assured via constraints that establish rules for the evaluation of these requirements.
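The interplay of stereotypes, tagged values, and constraints can be sketched programmatically. The sketch below is a minimal illustration of the general idea, assuming hypothetical names ("PersonalData", "purpose", "retention_days"); it is not the profile or the constraints defined in the paper, and a real UML tool would evaluate such rules in OCL rather than Python.

```python
# Hypothetical sketch: a stereotyped model element carries tagged
# values, and a constraint establishes a rule for evaluating them.
class StereotypedElement:
    def __init__(self, name, stereotype, tagged_values):
        self.name = name
        self.stereotype = stereotype        # e.g. "PersonalData"
        self.tagged_values = tagged_values  # dict of tag -> value

def check_constraint(element):
    """An OCL-like rule: elements stereotyped as personal data must
    declare a purpose and a finite retention period."""
    if element.stereotype != "PersonalData":
        return True  # the constraint applies only to stereotyped elements
    tv = element.tagged_values
    return tv.get("purpose") is not None and tv.get("retention_days", 0) > 0

ok = StereotypedElement("Email", "PersonalData",
                        {"purpose": "billing", "retention_days": 365})
bad = StereotypedElement("Location", "PersonalData", {"purpose": None})
print(check_constraint(ok), check_constraint(bad))  # True False
```

The point of the pattern is that the privacy requirement lives in the model itself (as stereotype plus tagged values), so a constraint checker can evaluate compliance mechanically rather than relying on out-of-band documentation.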