This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient’s background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient’s informed consent or violates a more general obligation to warn him about potentially harmful consequences. To support this view, I argue, first, that the conditions already widely accepted in the evaluation of risks, i.e. the ‘nature’ and ‘likelihood’ of risks, speak in favour of disclosure and, second, that principled objections against the disclosure of these risks do not withstand scrutiny. Moreover, I also explain that these risks are exacerbated by pandemics like the COVID-19 crisis, which further emphasises their significance.
Artificial intelligence (AI) increasingly executes tasks that previously only humans could do, such as driving a car, fighting in war, or performing a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of the most sophisticated AI systems do indeed create responsibility gaps, and I ask whether we can bridge these gaps at will, viz. whether certain people could take responsibility for AI-caused harm simply by performing a certain speech act, just as people can give permission for something simply by performing the act of consent. So understood, taking responsibility would be a genuine normative power. I first discuss and reject the view of Champagne and Tonkens, who advocate a view of taking liability. According to this view, a military commander can and must, ahead of time, accept liability to blame and punishment for any harm caused by autonomous weapon systems under her command. I then defend my own proposal of taking answerability, viz. the view that people can make themselves morally answerable for the harm caused by AI systems, not only ahead of time but also when harm has already been caused.
This paper focuses on voluntary consent in the context of living organ donation. Arguing against three dominant views, I claim that voluntariness must not be equated with willingness, that voluntariness does not require the exercise of relational moral agency, and that, in cases of third-party pressure, voluntariness critically depends on the role of the surgeon and the medical team, and not just on the pressure from other people. I therefore argue that an adequate account of voluntary consent cannot understand voluntariness as a purely psychological concept, that it has to be consistent with people pursuing various different conceptions of the good and that it needs to make the interaction between the person giving consent and the person (or people) receiving consent central to its approach.
The permissibility of nudging in public policy is often assessed in terms of the conditions of transparency, rationality, and easy resistibility. This debate has produced important resources for any ethical inquiry into nudging, but it has also failed to focus sufficiently on a different yet very important question, namely: when do nudges undermine a patient’s voluntary consent to a medical procedure? In this paper, I take on this further question and, more precisely, I ask to what extent the three conditions of transparency, rationality, and easy resistibility can be applied to the assessment of voluntary consent too. After presenting two examples, designed to put pressure on these three conditions, I show that, suitably modified, the three conditions can remain significant in the assessment of voluntary consent as well. However, the needed modifications are very substantial and result in a rather complicated view. To propose a tidier solution, I argue that nudging undermines voluntary consent if and only if it cannot be ‘interpersonally justified’ to the patient. I use the three modified conditions to motivate the idea of interpersonal justification and also to further specify the principles it involves. My resulting view is especially attractive because it builds on already existing insights from the debate on nudging, updates those insights with an eye to medical consent, and finally unites them in an elegant and simple framework.