2022
DOI: 10.1007/s43681-022-00135-x
Diffused responsibility: attributions of responsibility in the use of AI-driven clinical decision support systems

Abstract: Good decision-making is a complex endeavor, and particularly so in a health context. The possibilities for day-to-day clinical practice opened up by AI-driven clinical decision support systems (AI-CDSS) give rise to fundamental questions around responsibility. In causal, moral and legal terms, the application of AI-CDSS is challenging existing attributions of responsibility. In this context, responsibility gaps are often identified as the main problem. Mapping out the changing dynamics and levels of attributing res…



Cited by 52 publications (19 citation statements)
References 55 publications
“…Rationalizations are mental strategies that allow healthcare workers to justify their behavior and reduce their concerns in a process of diffusion of responsibility. This diffusion of responsibility will exacerbate this lack of concern once artificial intelligence becomes more deeply embedded in the healthcare setting [138]. Diffusion of responsibility shields the role of the healthcare facility and its management.…”
Section: Discussion (mentioning)
confidence: 99%
“…At this point, it is important to return to the first point: the use and application of automated systems in medical contexts take place in specific contexts for which there are already established rules and procedures. 37 For example, if an automated system is used for a specific part of medical decision-making, there are already institutionally established procedures for this decision-making, harm mitigation bodies, 10 rules for liability in decision-making and, last but not least, a medical ethics framework. 38 The crucial point here is that the existing institutional framework settings already define the minimum requirements for control that different actors such as clinicians, patients, caregivers, relatives and others can claim as entitlements and rights that have already been conceded.…”
Section: Extended Essay (mentioning)
confidence: 99%
“…The shift from an (individual-) rights-oriented governance to a risk-based AI governance has direct consequences on how far specific needs of persons with incapacities can be considered or not, for instance. The moment ethical principles such as vulnerability (Bleher & Braun, 2022; Braun, 2020) or justice (Braun & Hummel, 2022; Braun et al., 2021) are justified as normatively central, the evaluation of primarily risk-based governance of AI systems also changes.…”
Section: A Meta-framework For Applied AI Ethics Approaches (mentioning)
confidence: 99%