2022
DOI: 10.1007/s10676-022-09630-5

Trust in medical artificial intelligence: a discretionary account

Abstract: This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority …




Cited by 18 publications (7 citation statements)
References 43 publications
“…We draw on a stream of recent influential reviews across different disciplines to guide our theorizing and choice of context and key constructs (e.g., team collaboration, human–AI teams). These reviews informed us about the determinants of human trust in AI (Glikson & Woolley, 2020), discretionary use of AI (Nickel, 2022), human–robot teams (Wolf & Stock-Homburg, 2022), human–autonomy teaming (O’Neill et al, 2022), and overall challenges and opportunities posed by the rapid emergence of AI across multiple domains (Dwivedi et al, 2021), including in the context of long-duration spaceflight (c.f., Zumbado et al, 2011). Two themes emerged based on this initial review: attitudes toward AI in general and teams’ discretion in using AI.…”
Section: Literature Review (mentioning)
confidence: 99%
“…A key aspect of human-AI collaboration is trust 29 . Some authorities have argued that the concept of trust is inappropriate in this context, either because it is conceptually confused (we can never trust AI, because AI is not capable of 'trustworthiness' in the human sense) or dangerous (we should not trust AI, because we can never be sure it is acting as intended) [30][31][32] . Regardless of semantics, trust, or at least the assessment by the human of the reliability of AI, is important to consider.…”
Section: Completely Autonomous Scan Performance and Analysis (mentioning)
confidence: 99%
“…Call these approaches The Anthropocentric View of Trust. 5 This view has also been labeled the 'reductive view' [62], 'humans behind the machines' [24], 'indirect trust' [10], and 'human-centered terminology from philosophical accounts' [72]. They all commonly point out that the concept of trust, according to the traditional conception, is not suitable for technological artifacts [24,25].…”
Section: Anthropocentric View of Trust (mentioning)
confidence: 99%
“…In some cases, there might be a normative expectation of those who hold the anthropocentric view of trust that others will subscribe to the restrictive usage of the concept. Failing to do so might lead "to a corrupted form of trust in a domain where rightful trust is of paramount importance" [62]. Often, scholars who analyze the term suggest replacing 'Trustworthy AI' with 'Reliable AI'.…”
Section: The Shift From 'Trustworthy AI' To 'Reliable AI' Will (Proba... (mentioning)
confidence: 99%