2023 | Preprint
DOI: 10.22541/au.168209222.21704626/v2

Clinicians Risk Becoming "Liability Sinks" for Artificial Intelligence

Abstract: The benefits of AI in healthcare will only be realised if we consider the whole clinical context and the AI's role in it. The current, standard model of AI-supported decision-making in healthcare risks reducing the clinician's role to a mere 'sense check' on the AI, whilst at the same time leaving them to be held legally accountable for decisions made using AI. …

Cited by 1 publication (2 citation statements) | References 13 publications
“…Identified potential risks to clinicians were grouped into three kinds: (1) risks to professional competence, which may arise because use of Dora could deprive trainees of the opportunity to develop their skills by making routine follow-up calls with patients; (2) risks to psychological well-being, because clinicians may see only the difficult cases and could consequently burn out more quickly if Dora handles all of the easy, uncomplicated cases; and (3) legal risks, because the degree to which a clinician is liable for either failing to conform to Dora's recommendation when they should, or conforming to it when they should not, is unclear [29]. As identified in the insights section below, and picked up in the autonomy and justice stages of the case study, these are questions around which there is substantial uncertainty.…”
Section: Beneficence and Non-maleficence: Interim Results and Insights
Confidence: 99%
“…Questions remain around how to analyse, report, and act upon errors and potential harms from their real-world use, whilst maintaining an agile and capable regulatory framework for addressing their responsible deployment across product life-cycles [45]. Regulators and academics [29] are responding, for example, with the Food and Drug Administration's (FDA) AI/machine learning (ML) software as a medical device action plan [12] and the Medicines & Healthcare products Regulatory Agency's (MHRA) change programme for regulating software and AI as a medical device [33], but gaps exist in our understanding of how we maintain the trustworthiness of these ever-learning systems as we scale their adoption across new clinical settings and incorporate the latest AI models (Figure 2). Additionally, it is increasingly clear that a gap exists between legislated minimum safety requirements and ethical acceptability.…”
Section: The Regulatory Context
Confidence: 99%