2021
DOI: 10.1186/s12911-021-01542-6

Examining the effect of explanation on satisfaction and trust in AI diagnostic systems

Abstract: Background: Artificial Intelligence has the potential to revolutionize healthcare, and it is increasingly being deployed to support and assist medical diagnosis. One potential application of AI is as the first point of contact for patients, replacing initial diagnoses prior to sending a patient to a specialist, allowing health care professionals to focus on more challenging and critical aspects of treatment. But for AI systems to succeed in this role, it will not be enough for them to merely pro…

Cited by 40 publications (9 citation statements)
References 48 publications

“…Self-report measures as a measure of trust are especially problematic, as they are well known to be affected by biases such as the perceived desirability of answering in a certain manner and are inconsistent between subjects, over-relying on the subject's perception of themselves (Paulhus et al., 2007). Trust in AI models has been studied previously for classification tasks such as disease diagnosis (Anton et al., 2022; Alam & Mueller, 2021). This is, however, not applicable to AI models that make predictions on a continuous scale.…”
Section: Related Work (mentioning)
confidence: 99%
“…For instance, Zhou et al. [13] showed that the explanation of influences of training data points on predictions significantly increased the user trust in predictions. Alam and Mueller [14] investigated the roles of explanations in AI-informed decision-making in medical diagnosis scenarios. The results show that visual and example-based explanations integrated with rationales had a significantly better impact on patient satisfaction and trust than no explanations, or with text-based rationales alone.…”
Section: AI Explanation (mentioning)
confidence: 99%
“…Furthermore, with the advancement of AI explanation research, different explanation approaches such as local and global explanations, as well as feature importance-based and example-based explanations are proposed [6]. As a result, besides the explanation presentation styles such as visualisation and text [14,15], it is also critical to understand how different explanation approaches affect user trust in AI-informed decision-making. In addition, Edwards [16] stated that the main challenge for AI-informed decision-making is to know whether an explanation that seems valid is accurate.…”
Section: AI Explanation (mentioning)
confidence: 99%
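
The excerpt above distinguishes feature importance-based from example-based explanations. As a rough illustration of that distinction (not the setup used in any of the cited studies), the following minimal Python sketch produces a global permutation-importance explanation and a nearest-neighbour example-based explanation; the dataset, model, and variable names are placeholder assumptions.

```python
# Minimal sketch contrasting two explanation families named above:
# feature importance-based vs. example-based. The dataset and model
# are illustrative placeholders, not those from the cited papers.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Feature importance-based (global) explanation: how much does shuffling
# each feature degrade held-out accuracy?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("Most influential features:", list(data.feature_names[top]))

# Example-based (local) explanation: which training case most resembles
# the instance being explained?
nn = NearestNeighbors(n_neighbors=1).fit(X_train)
_, idx = nn.kneighbors(X_test[:1])
print("Closest training example:", idx[0, 0], "with label", y_train[idx[0, 0]])
```

A deployed system would then pair such outputs with textual or visual rationales, which is the presentation dimension the cited studies manipulate.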
“…Consequently, PT answers the question "How trustworthy do I think the system is?". PT is another latent construct that is not directly observable but that can be measured indirectly as is commonly done in research assessing people's perceptions of system trustworthiness (e.g., by asking people to report on their perceived trustworthiness of a system [3,97], by observing people's interactions with systems [104,117]). PT is a trustor's assessment of the AT of the system based on a cognitive and affective evaluation of the system [6,70,77].…”
Section: Perceived Trustworthiness (PT) (mentioning)
confidence: 99%
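
Since the excerpt describes perceived trustworthiness as a latent construct measured indirectly through self-report scales, here is a hypothetical sketch of how such questionnaire responses are commonly aggregated into a single score; the item count, 7-point scale, and reverse-coded item are assumptions for illustration, not the instrument used in the cited work.

```python
# Hypothetical scoring of a self-report perceived-trust questionnaire.
# Scale range, item count, and reverse-coded items are assumed here,
# not taken from the cited papers.
import numpy as np

def perceived_trust_score(responses, reverse_items=(), scale_max=7):
    """Average Likert responses into one trust score per participant.

    responses: (n_participants, n_items) values in 1..scale_max.
    reverse_items: indices of negatively worded items to reverse-code.
    """
    r = np.array(responses, dtype=float)  # copy so the input is untouched
    for i in reverse_items:
        r[:, i] = scale_max + 1 - r[:, i]
    return r.mean(axis=1)

# Example: three participants, four items, item index 3 negatively worded.
answers = [[6, 7, 5, 2], [4, 4, 5, 4], [7, 6, 6, 1]]
print(perceived_trust_score(answers, reverse_items=(3,)))  # trust scores: [6.0, 4.25, 6.5]
```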