Demand is growing for greater accountability regarding the technological systems that increasingly occupy our world. However, the complexity of many of these systems, often systems-of-systems, poses accountability challenges. A key reason is that the details and nature of the information flows that interconnect and drive systems, which often occur across technical and organisational boundaries, tend to be invisible or opaque. This paper argues that data provenance methods show much promise as a technical means for increasing the transparency of these interconnected systems. Specifically, given concerns over ever-increasing levels of automated and algorithmic decision-making, and so-called 'algorithmic systems' in general, we propose decision provenance as a promising concept. Decision provenance entails using provenance methods to expose decision pipelines: the chains of inputs to, the nature of, and the flow-on effects from the decisions and actions taken (at design and run-time) throughout systems. This paper introduces the concept of decision provenance and offers an interdisciplinary (tech-legal) exploration of its potential for assisting accountability in algorithmic systems. We argue that decision provenance can help facilitate oversight, audit, compliance, risk mitigation, and user empowerment, and we indicate the implementation considerations and research areas necessary for realising its vision. More generally, we make the case that considerations of data flow, and of systems more broadly, are important to discussions of accountability, and complement the considerable attention already given to algorithmic specifics.
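To make the idea of a decision pipeline concrete, the following is a minimal sketch (not the authors' implementation; all class and field names here are hypothetical) of how provenance records might chain a decision back to its upstream inputs, exposing the pipeline that produced it:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """One entry in a decision pipeline: what fed a decision,
    what was decided, and which later decisions it influenced."""
    entity: str                                   # component taking the decision
    inputs: list = field(default_factory=list)    # data or upstream decisions used
    decision: str = ""                            # the decision or action taken
    effects: list = field(default_factory=list)   # downstream records influenced

def trace_back(record):
    """Walk the chain of inputs to expose the full decision pipeline."""
    chain = [record.decision]
    for upstream in record.inputs:
        if isinstance(upstream, ProvenanceRecord):
            chain = trace_back(upstream) + chain
    return chain

# Hypothetical example: a credit decision driven by an upstream scoring decision
scoring = ProvenanceRecord(entity="ScoringService",
                           inputs=["applicant_data"],
                           decision="score=640")
credit = ProvenanceRecord(entity="CreditEngine",
                          inputs=[scoring],
                          decision="loan_denied")
scoring.effects.append(credit)   # flow-on effect recorded on the upstream entry

print(trace_back(credit))  # ['score=640', 'loan_denied']
```

A production system would more likely build on an established provenance model such as W3C PROV, but the same principle applies: each decision records its inputs and effects so the pipeline can later be reconstructed across system boundaries.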
XR (Virtual, Augmented and Mixed Reality) technologies are growing in prominence. However, they are increasingly being used in sectors and situations that can result in harm. As such, this paper argues that auditability needs to become a key consideration of XR systems. Auditability entails capturing information about a system's operation to enable oversight, inspection or investigation. Things can and will go wrong, and information that helps unpack situations of failure or harm, and that enables accountability and recourse, will be crucial to XR's adoption and acceptance. In drawing attention to the urgent need for auditability, we illustrate some of the risks associated with XR technology and their audit implications, and present initial findings from a survey of developers indicating the current 'haphazard' approach towards such concerns. We also highlight challenges and considerations of XR audit in practice, as well as areas of future work for taking this important area of research forward.

CCS CONCEPTS: • Human-centered computing → Virtual reality; Mixed / augmented reality; • General and reference → Verification.
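As a rough illustration of what "capturing information of a system's operation" could look like in practice, here is a minimal sketch of an append-only audit log for an XR system (the component names and event fields are hypothetical, not drawn from the paper):

```python
import json
import time

class AuditLog:
    """A minimal append-only record of system events, sketching the kind
    of operational information that auditability requires."""

    def __init__(self):
        self._entries = []

    def record(self, component, event, detail=None):
        """Append one event; entries are never modified after the fact."""
        self._entries.append({
            "ts": time.time(),       # when the event occurred
            "component": component,  # which part of the system acted
            "event": event,          # what happened
            "detail": detail or {},  # contextual information for later review
        })

    def export(self):
        """Serialise the log for external inspection or investigation."""
        return json.dumps(self._entries, indent=2)

# Hypothetical XR events: a low-confidence pose estimate followed by
# a rendered overlay that an investigator may later need to account for.
log = AuditLog()
log.record("tracker", "pose_update", {"confidence": 0.42})
log.record("renderer", "overlay_shown", {"object": "warning_sign"})
```

A real deployment would also need tamper-evidence (e.g. hash-chaining entries), retention policies, and privacy safeguards, since XR sensors can capture highly sensitive data about users and bystanders.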