2023
DOI: 10.1016/j.artint.2022.103839
Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making

Cited by 47 publications (11 citation statements)
References 45 publications
“…Third, our results extend the group threat viewpoint by showing that speciesism was a boundary condition in the relationship between medical staff participation and AI anxiety. Most previous research has not explored whether the social identity of AI affects users’ attitudes towards AI in the context of human-computer and human-AI interaction [21], [60], [84], [85]. Meanwhile, scholars have proposed that speciesism might influence users’ attitudes towards new technologies, though this has not yet been confirmed [60].…”
Section: Discussion
confidence: 99%
“…A recent review of the literature on AI in healthcare found that the use of AI systems has the potential to benefit physicians by relieving duties related to patient records and other administrative tasks, streamlining physicians’ efforts through patient screening, advising patients on when to seek help, guiding physicians on diagnosis and treatment options, and reducing medical error or unconscious bias [86]. Different stakeholders in healthcare systems have been shown to hold diverse expectations of, and opinions on the challenges of, adopting AI systems in healthcare [34, 87].…”
Section: Methodology for Using the AI Expectations Management Framework
confidence: 99%
“…This example of explainable artificial intelligence (XAI) is one in a growing area of research in the field of data science. Many methods are available for designers to employ [33], [34], [35]; for example, visualization-based proxy models layered on top of black-box models can provide a visual model of the internal workings of the AI system, although they cannot be used with extremely complex models [36].…”
Section: Theoretical Foundations of the AI Expectations Management Fr...
confidence: 99%
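The proxy-model idea described in the excerpt above can be illustrated with a global surrogate, a common XAI technique: a simple, interpretable model is trained to mimic the predictions of an opaque model, yielding an inspectable approximation of its decision logic. This is a minimal sketch, not any specific method from the cited works; the dataset, model choices, and depth limit are all illustrative assumptions.

```python
# Global-surrogate sketch: fit an interpretable tree to a black box's outputs.
# All model/data choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for clinical features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Opaque model standing in for the black-box clinical AI system.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: train a shallow tree on the black box's *predictions*,
# not the true labels, so it approximates the black box itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of inputs where surrogate and black box agree.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")

# The tree itself is the "visual model" of the black box's internal logic.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

The surrogate's fidelity score quantifies the excerpt's caveat: as the underlying model grows more complex, a simple proxy agrees with it less often and the explanation degrades.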
“…Communication between healthcare professionals and the patient is influenced by the healthcare professional’s model of care and is central to the health and disease process [1]. The bio-psychosocial model [2, 3] maintains that patient-centred communication is essential for achieving better health outcomes [4], including greater involvement in treatment, improved quality of life and reduced use of health services [5, 6].…”
Section: Introduction
confidence: 99%