2021
DOI: 10.1007/s11017-021-09553-0
What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms

Abstract: In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomor…

Cited by 13 publications (16 citation statements)
References 47 publications
“…There remains uncertainty regarding the use of AI tools in healthcare from a legal, ethical, and regulatory standpoint. If it is inevitable that an AI tool will make a mistake, then a key question is, who is legally responsible―the pathologist, the tool itself, or the implementing trust? 65,66 There is no precedent for this, and clarification from the medicolegal community is warranted, encouraging prospective legislative action rather than reactive (and likely damaging) legal action. 67 There are also numerous ethical challenges with the use of AI.…”
Section: Discussion (mentioning)
Confidence: 99%
“…At the centre of the ethical discourse are worries and concerns linked to the development, design, and deployment of artificial intelligence systems, which we refer to as ethical risks. This focus has been, however, challenged because it neglects the circumstance that AI systems not only pose challenges to moral goods like values, rights, or duties but also mediate how we conceptualise such moral goods [7].…”
Section: A. The Ethical Discussion on AI (mentioning)
Confidence: 99%
“…Domain experts have been observed to hastily rely on AI agent judgments even when instructed to think critically about each judgment [81]. AI may be trusted too readily because of its (at least seeming) objectivity; AI health algorithms replace subjective and fallible human judgments with objective ones based on rigorous data, or so it seems [27]. If sufficiently accurate and powerful, AI has the potential to substitute trust for certainty [82].…”
Section: Trust (mentioning)
Confidence: 99%
“…and accuracy (or performance, power, etc.). Though not universally seen to really be in conflict, the dilemma between transparency/interpretability and accuracy was far and away the one most discussed by papers in the review [18, 27, 30, 34, 72, 78, 96, 98]. There is some evidence that, despite the seeming importance of transparency, stakeholders actually care more about effectiveness than transparency.…”
Section: Transparency (mentioning)
Confidence: 99%