Building on the growing literature in algorithmic accountability, this paper investigates the use of a process visualisation technique known as the Petri net to achieve the aims of Privacy by Design. The strength of the approach is that it can help to bridge the knowledge gap that often exists between those in the legal and technical domains. Intuitive visual representations of the status of a system and the flow of information within and between legal and system models mean developers can embody the aims of the legislation from the very beginning of the software design process, while lawyers can gain an understanding of the inner workings of the software without needing to understand code. The approach can also facilitate automated formal verification of the models' interactions, paving the way for machine-assisted privacy by design and, potentially, more general 'compliance by design'. Opening up the 'black box' in this way could be a step towards achieving better algorithmic accountability.
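To make the technique concrete, the following is a minimal, hypothetical sketch (not taken from the article) of a Petri net in Python: places hold tokens, and a transition fires only when every one of its input places holds a token, consuming one token from each input and producing one in each output. The toy "consent" flow below is an invented example of how a legal precondition might gate an information flow in such a model.

```python
class PetriNet:
    """A minimal place/transition Petri net with unit arc weights."""

    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.transitions = {}          # transition name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        # A transition is enabled when every input place has at least one token.
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        # Firing consumes one token per input place, produces one per output place.
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical data-protection flow: processing can only occur
# once both data collection and consent are present.
net = PetriNet({"data_collected": 1, "consent_given": 1, "processing": 0})
net.add_transition("process", ["data_collected", "consent_given"], ["processing"])
net.fire("process")
print(net.marking)
```

Because the reachable markings of such a net can be enumerated mechanically, properties like "processing never occurs without consent" become checkable by a model checker, which is the kind of automated formal verification the abstract alludes to.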
This article introduces digisprudence, a theory about the legitimacy of software that both conceptualises regulative code’s potential illegitimacies and suggests concrete ways to ameliorate them. First, it develops the notion of computational legalism – code’s ruleishness, opacity, immediacy, immutability, pervasiveness, and private production – before sketching how code regulates, according to design theory and the philosophy of technology. These ideas are synthesised into a framework of digisprudential affordances, which are translations of legitimacy requirements, derived from legal philosophy, into the conceptual language of design. The ex ante focus on code’s production is pivotal, in turn suggesting a guiding ‘constitutional’ role for design processes. The article includes a case study on blockchain applications and concludes by setting out some avenues for future work.
Artificial intelligence systems have become ubiquitous in everyday life, and their potential to improve efficiency in a broad range of activities that involve finding patterns or making predictions has made them an attractive technology for the humanitarian sector. However, concerns over their intrusion on the right to privacy and their possible incompatibility with data protection principles may pose a challenge to their deployment. Furthermore, in the humanitarian sector, compliance with data protection principles is not enough, because organisations providing humanitarian assistance also need to comply with humanitarian principles to ensure the provision of impartial and neutral aid that does not harm beneficiaries in any way. In view of this, the present contribution analyses a hypothetical facial recognition system based on artificial intelligence that could assist humanitarian organisations in their efforts to identify missing persons. Because such a system could create risks by providing information on missing persons that could potentially be used by harmful actors to identify and target vulnerable groups, it ought only to be deployed after a holistic impact assessment has been made, to ensure its adherence to both data protection and humanitarian principles.