The ethical and societal implications of artificial intelligence systems raise pressing concerns. In this paper we outline a novel process grounded in applied ethics, Z-Inspection®, for assessing whether an AI system is trustworthy. We use the definition of trustworthy AI given by the European Commission's High-Level Expert Group on AI. Z-Inspection® is a general inspection process that can be applied in a variety of domains where AI systems are used, including business, healthcare, and the public sector. To the best of our knowledge, Z-Inspection® is the first process for assessing trustworthy AI in practice.
Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms "trustworthy" AI. These guidelines are aimed at a variety of stakeholders, especially practitioners, and steer them toward more ethical and more robust applications of AI. In line with the efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how the AI HLEG's general trustworthy AI guidelines can be used in practice in the healthcare domain. To this end, we present a best practice for assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is carried out by an independent team of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By drawing on an interdisciplinary team, we aim to expose the complex trade-offs involved and the necessity of such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use Z-Inspection®, a process for assessing trustworthy AI, to identify specific challenges and potential ethical trade-offs that arise when AI is used in practice.
As one of us has suggested previously, there are several possibilities for the creation of company structures that might provide functional and adaptive legal "housing" for advanced software, various types of artificial intelligence, and other programmatic systems and organizations, phenomena that we refer to here collectively as autonomous systems for ease of reference. In particular, this prior work introduces the notion that an operating agreement or private entity constitution (such as a corporation's charter or a partnership's operating agreement) can adopt, as the acts of a legal entity, the state or actions of arbitrary physical systems. We call this the algorithm-agreement equivalence principle. Given this principle and the present capacities of existing forms of legal entities, companies of various kinds can serve as a mechanism through which autonomous systems might engage with the legal system. This paper considers the implications of this possibility from a comparative and international perspective. Our goal is to suggest how, under U.S., German, Swiss, and U.K. law, company law might furnish the functional and adaptive legal "housing" for an autonomous system and, in turn, to inform systems designers, regulators, and others who are interested in, encouraged by, or alarmed at the possibility that an autonomous system may "inhabit" a company and thereby gain some of the incidents of legal personality. We do not aim here to be normative. Instead, the paper lays out a template suggesting how existing laws might provide a potentially unexpected regulatory framework for autonomous systems and explores some legal consequences of this possibility. We do suggest that these considerations might spur others to examine the relevant provisions of their own national laws with a view to locating similar legal "spaces" that autonomous systems could "inhabit."