This paper brings together a multidisciplinary perspective from systems engineering, ethics, and law to articulate a common language in which to reason about the multi-faceted problem of assuring the safety of autonomous systems. The paper's focus is on the "gaps" that arise across the development process: the semantic gap, where normal conditions for a complete specification of intended functionality are not present; the responsibility gap, where normal conditions for holding human actors morally responsible for harm are not present; and the liability gap, where normal conditions for securing compensation to victims of harm are not present. By categorising these "gaps" we can expose with greater precision key sources of uncertainty and risk with autonomous systems. This can inform the development of more detailed models of safety assurance and contribute to more effective risk control.
Policymakers and researchers consistently call for greater human accountability for AI technologies. We should be clear about two distinct features of accountability.

Across the AI ethics and global policy landscape, there is consensus that there should be human accountability for AI technologies [1]. These machines are used for high-stakes decision-making in complex domains - for example, in healthcare, criminal justice and transport - where they can cause or occasion serious harm. Some use deep machine learning models, which can make their outputs difficult to understand or contest. At the same time, when the datasets on which these models are trained reflect bias against specific demographic groups, the bias becomes encoded and causes disparate impacts [2-4]. Meanwhile, an increasing number of machines that embody AI, and specifically machine learning, such as highly automated vehicles, can execute decision-making functions and take actions independently of direct, real-time human control, in unpredictable conditions that call for adaptive performance. This development can make human agency seem obscure. In light of these problems, a heterogeneous group of researchers and organizations has called for stronger, more explicit regulation and guidelines to ensure accountability for AI and autonomous systems [1,5-7].

But what do we mean by 'accountability', and do we all mean the same thing? Accountability comes in different forms and varieties across rich and overlapping strands of academic literature in the humanities, law and social sciences. Scholars in the AI ethics field have recently proposed systematic conceptualizations of accountability to address this complexity [8-11]. Several researchers in the field [8,10] take explicit inspiration from Bovens's influential analysis of accountability as a social relation, in which he describes accountability as "a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences" [12].

A welcome development within the AI ethics landscape would be greater conceptual clarity on the distinction between the 'explaining' and 'facing the consequences' features of accountability, as well as the relation between them. This matters ethically, legally and politically, because these two core features of accountability - that is, giving an explanation, and facing the consequences - can come apart and pull in different directions. We highlight them because, as the quotation illustrates, they represent a central bifurcation of the concept of accountability [12,13]. In addition, their relation is particularly complex when it comes to AI technologies.
With the Government's rhetoric on the Big Society, it is now time to review a number of legal doctrines from the perspective of the phenomenon of volunteering. Volunteers contribute significantly to the United Kingdom's GDP, between 2% and 3% according to the European Union.1 As reported by a Department for Communities and Local Government survey, a significant proportion of adults in the United Kingdom volunteer.2 A volunteering industry has developed.3 The Government, as part of its Big Society project, has encouraged volunteering organisations to provide services which were traditionally delivered by paid employees of the state or local authorities.4 Voluntary organisations also compete amongst themselves, and with commercial concerns, for contracts to deliver services on a commercial basis, although the service itself may be delivered by unpaid volunteers. For example, a volunteer-staffed Legal Advice Centre may hold a contract with the Legal Services Commission and use paid employees and volunteers working alongside one another to fulfil the contract, or a first aid organisation such as St John Ambulance may contract to provide first aid coverage to a commercial event, but the staff provided will be unpaid volunteers.
Cox v Ministry of Justice [2016] UKSC 10; [2016] 2 W.L.R. 806 and Mohamud v Wm Morrison Supermarkets plc [2016] UKSC 11; [2016] 2 W.L.R. 821 expand the reach of vicarious liability. In Various Claimants v Institute of the Brothers of the Christian Schools [2012] UKSC 56; [2013] 2 A.C. 1 ("CCWS"), Lord Phillips had declared that vicarious liability "is on the move". Lord Reed stated in Cox that "it has not yet come to a stop".
The benefits of AI in healthcare will only be realised if we consider the whole clinical context and the AI's role in it. The current, standard model of AI-supported decision-making in healthcare risks reducing the clinician's role to a mere 'sense check' on the AI, whilst at the same time leaving them to be held legally accountable for decisions made using AI. This model means that clinicians risk becoming "liability sinks", unfairly absorbing liability for the consequences of an AI's recommendation without having sufficient understanding or practical control over how those recommendations were reached. Furthermore, this could have an impact on the "second victim" experience of clinicians. It also means that clinicians are less able to do what they are best at, specifically exercising sensitivity to patient preferences in a shared clinician-patient decision-making process. There are alternatives to this model that can have a more positive impact on clinicians and patients alike.