2021
DOI: 10.1136/medethics-2020-107095

Before and beyond trust: reliance in medical AI

Abstract: Artificial intelligence (AI) is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This pape…

Cited by 45 publications (29 citation statements)
References 46 publications
“…The ethics of AI-driven digital pathology Trustworthiness has been shown to be earned in a number of ways in medical contexts, for instance, through consent processes, professional integrity, and community engagement [35]. In the context of AI, it is argued that trustworthiness is earned by developing algorithms that are lawful, ethical, and robust and which respect principles of human autonomy, prevention of harm, fairness, and explicability (the last of which includes ideals of transparency and explainability) [45,46]. All of these norms will no doubt continue to be important to the public acceptability of AI.…”
“…guaranteed. Although Graham fails to clearly articulate the distinction between assured reliance, or confidence, and mere reliance [13], he does outline criteria for developing confidence-worthy systems for data sharing (here we substitute AI systems), arguing that they would involve: meaningful (i.e. understandable) transparency arrangements that can be checked, clear mechanisms of accountability, and assurances that the data involved are representative (i.e.…”
Section: Discussion
“…are working for the public good). Graham [30] argues that confidence, unlike trust, is not dependent upon A’s recognition of B’s good will [2, 15], but, like reliance [13], is an enforceable obligation. Consequently, if AI development does not meet the conditions outlined above, for example, if it fails to meet the transparency requirement for confidence, then AI developers will be sanctioned.…”
Section: Discussion
“…Moreover, in practice g(H, y) is not the only metric to be optimized. Even assuming we care solely about economic value, profits, while correlated with (F1 score) performance, turn also on variables like user adoption and trust [28,9,15]. A perfectly accurate classifier that is never used generates no revenue.…”
Section: Implications on ML Pipelines
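The point in the quote above — that realized value depends on adoption and trust, not on classifier performance alone — can be illustrated with a minimal sketch. All names here (`expected_value`, the per-use value model) are hypothetical illustrations, not definitions from the cited paper:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Standard F1 from raw counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def expected_value(f1: float, adoption_rate: float,
                   value_per_use: float, n_cases: int) -> float:
    """Toy value model: realized value scales with both performance
    and how often clinicians actually use (trust) the system."""
    return f1 * adoption_rate * value_per_use * n_cases

# A perfectly accurate classifier that is never used generates no revenue:
print(expected_value(f1=1.0, adoption_rate=0.0,
                     value_per_use=50.0, n_cases=1000))  # 0.0

# A less accurate but trusted and adopted classifier generates more:
f1 = f1_score(tp=90, fp=10, fn=10)  # 0.9
print(expected_value(f1=f1, adoption_rate=0.8,
                     value_per_use=50.0, n_cases=1000))
```

Under this toy model, optimizing F1 in isolation maximizes only one factor of a product; the adoption factor is exactly where the trust and reliance questions discussed in the paper enter.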