BACKGROUND
Artificial intelligence-based Clinical Decision Support Systems (AI-CDSS) offer healthcare workers personalized medicine and improved healthcare efficiency. Despite these opportunities, trust in these tools remains a critical factor for their successful integration. Existing research lacks synthesized insights and actionable recommendations for fostering healthcare workers' trust in AI-CDSS.
OBJECTIVE
This study aims to identify and synthesize factors that can guide the design of systems that foster healthcare worker trust in AI-CDSS.
METHODS
We performed a systematic review of studies published from January 2020 to November 2024, retrieved from PubMed, Scopus, and Google Scholar, focusing on healthcare workers' perceptions, experiences, and trust in AI-CDSS. Two independent reviewers followed the Cochrane Collaboration Handbook and PRISMA 2020 guidelines to develop a data charter and synthesize the study data. The Critical Appraisal Skills Programme (CASP) tool was applied to assess the quality of the included studies and evaluate the risk of bias, ensuring a rigorous and systematic review process.
RESULTS
The review included 27 studies that met the inclusion criteria, covering diverse healthcare workers predominantly in hospital settings. Qualitative methods dominated (n=16, 59%), with sample sizes ranging from small focus groups to over 1,000 participants. Seven key themes were identified: System Transparency, Training and Familiarity, System Usability, Clinical Reliability, Credibility and Validation, Ethical Considerations, and Customization and Control. Each theme encompassed enablers and barriers that influence healthcare workers' trust in AI-CDSS.
CONCLUSIONS
Across the seven thematic areas, enablers include transparency, training, usability, and clinical reliability, while barriers include algorithmic opacity and ethical concerns. Recommendations emphasize the explainability of AI models, comprehensive training, stakeholder involvement, and human-centered design to foster healthcare worker trust in AI-CDSS.