Personalized medicine uses fine-grained information on individual persons to pinpoint deviations from the normal. ‘Digital Twins’ in engineering provide a conceptual framework to analyze these emerging data-driven health care practices, as well as their conceptual and ethical implications for therapy, preventative care and human enhancement. Digital Twins stand for a specific engineering paradigm, where individual physical artifacts are paired with digital models that dynamically reflect the status of those artifacts. When applied to persons, Digital Twins are an emerging technology that builds on in silico representations of an individual that dynamically reflect molecular status, physiological status and lifestyle over time. We use Digital Twins to stand for the hypothetical situation in which very detailed biophysical and lifestyle information about a person is available over time. This perspective redefines the concept of ‘normality’ or ‘health’ as a set of patterns that are regular for a particular individual, against the backdrop of patterns observed in the population. It will also affect what is considered therapy and what is enhancement, as illustrated by the cases of the ‘asymptomatic ill’ and life extension via anti-aging medicine. These changes follow from how meaning is derived once measurement data are available: moral distinctions may come to rest on patterns found in these data and on the meanings grafted onto those patterns. Ethical and societal implications of Digital Twins are explored. Digital Twins imply a data-driven approach to health care. This approach has the potential to deliver significant societal benefits and can function as a social equalizer by enabling effective equalizing enhancement interventions. It can, however, also drive inequality, since a Digital Twin may not be an accessible technology for everyone and since patterns identified across a population of Digital Twins can lead to segmentation and discrimination. This duality calls for governance as this emerging technology matures, including measures that ensure data privacy and transparency about data usage and derived benefits.
Debates on lethal autonomous weapon systems have proliferated in the past five years. Ethical concerns have been voiced about a possible increase in the number of wrongs and crimes in military operations and about the creation of a "responsibility gap" for harms caused by these systems. To address these concerns, the principle of "meaningful human control" has been introduced in the legal-political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what "meaningful human control" exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of "guidance control" as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of "Responsible Innovation" and "Value-sensitive Design," our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a "tracking" condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a "tracing" condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in the ethics of robotics and AI, in the last part of the paper we start exploring the implications of our account for the design and use of non-military autonomous systems, for instance, self-driving cars. Keywords: meaningful human control, autonomous weapon systems, responsibility gap, ethics of robotics, responsible innovation in robotics, value-sensitive design in robotics, AI ethics, ethics of autonomous systems
The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (gaps in culpability, moral accountability, public accountability, and active responsibility) caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also arise with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It proposes a critical review of partial and unsatisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved by simply introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to addressing responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control,” that is, systems aligned with the relevant human reasons and capacities.
In this paper, in line with the general framework of value-sensitive design, we aim to operationalize the general concept of “Meaningful Human Control” (MHC) in order to pave the way for its translation into more specific design requirements. In particular, we focus on the operationalization of the first of the two conditions investigated by Santoni de Sio and Van den Hoven (2018): the so-called ‘tracking’ condition. We conduct our investigation in relation to one specific class of automated systems: dual-mode driving systems (e.g. Tesla’s ‘Autopilot’). First, we connect and compare meaningful human control with a concept of control that is very popular in engineering and traffic psychology (Michon 1985), and we explain to what extent tracking resembles and differs from it. This helps clarify the extent to which the idea of meaningful human control is connected to, but also goes beyond, current notions of control in engineering and psychology. Second, we take the systematic analysis of practical reasoning as traditionally presented in the philosophy of human action (Anscombe, Bratman, Mele) and adapt it to offer a general framework in which different types of reasons and agents are identified according to their relation to an automated system’s behaviour. This framework is meant to help explain which reasons and which agents (should) play a role in controlling a given system, thereby enabling policy-makers to produce usable guidelines and engineers to design systems that properly respond to selected human reasons. In the final part, we discuss a practical example of how our framework could be employed in designing automated driving systems.
This theoretical paper draws the scientific community’s attention to how pharmacological cognitive enhancement may affect society and law. Specifically, if safe, reliable, and effective techniques to enhance mental performance are eventually developed, this may under some circumstances impose new duties on people in high-responsibility professions, such as surgeons or pilots, to use such substances to minimize the risk of adverse outcomes or to increase the likelihood of good outcomes. By discussing this topic, we also hope to encourage scientists to bring their expertise to bear on this current public debate.