Machine Learning, and Artificial Intelligence (AI) more broadly, have great immediate and future potential for transforming almost all aspects of medicine. However, in many applications, even outside medicine, a lack of transparency in AI systems has become increasingly problematic. This is particularly pronounced where users need to interpret the output of AI systems. Explainable AI (XAI) provides a rationale that allows users to understand why a system has produced a given output, so that the output can be interpreted within a given context. One area in great need of XAI is that of Clinical Decision Support Systems (CDSSs). These systems support medical practitioners in their clinical decision-making and, in the absence of explainability, may lead to under- or over-reliance. Providing explanations of how recommendations are arrived at will allow practitioners to make more nuanced, and in some cases life-saving, decisions. The need for XAI in CDSSs, and in the medical field in general, is amplified by the need for ethical and fair decision-making and by the fact that AI trained on historical data can reinforce historical actions and biases that should be uncovered. We performed a systematic literature review of work to date on the application of XAI in CDSSs. XAI-enabled systems that process tabular data are the most common in the literature, while XAI-enabled CDSSs for text analysis are the least common. Developers show greater interest in providing local explanations, while there is an almost even balance between post-hoc and ante-hoc explanations, and between model-specific and model-agnostic techniques. Studies reported benefits of XAI such as enhancing clinicians' confidence in decisions and generating hypotheses about causality, which ultimately increase the trustworthiness and acceptability of a system and the potential for its incorporation into the clinical workflow. However, we found an overall distinct lack of application of XAI in the context of CDSSs and, in particular, a lack of user studies exploring the needs of clinicians. We propose guidelines for the implementation of XAI in CDSSs and explore opportunities, challenges, and future research needs.
Gestational Diabetes Mellitus (GDM), a common pregnancy complication associated with many maternal and neonatal consequences, occurs at increased rates in mothers with overweight and obesity. Interventions initiated early in pregnancy can reduce the rate of GDM in these women; however, untargeted interventions can be costly and time-consuming. We have developed an explainable machine learning-based clinical decision support system (CDSS) to identify at-risk women in need of targeted pregnancy intervention. Maternal characteristics and blood biomarkers at baseline from the PEARS study were used. After appropriate data preparation, synthetic minority oversampling technique (SMOTE), and feature selection, five machine learning algorithms were applied with five-fold cross-validated grid search optimising balanced accuracy. Our models were explained with Shapley additive explanations (SHAP) to increase the trustworthiness and acceptability of the system. We developed multiple models for different use cases: theoretical (AUC-PR 0.485, AUC-ROC 0.792), GDM screening during a normal antenatal visit (AUC-PR 0.208, AUC-ROC 0.659), and remote GDM risk assessment (AUC-PR 0.199, AUC-ROC 0.656). Our models have been implemented as a web server that is publicly available for academic use. Our explainable CDSS demonstrates the potential to assist clinicians in screening at-risk patients who may benefit from early pregnancy GDM prevention strategies.
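As a concrete illustration of such a pipeline, the following is a minimal Python sketch, assuming scikit-learn, imbalanced-learn, and shap; the feature names, model choice, and hyperparameter grid are illustrative assumptions, not the study's actual configuration.

```python
# Minimal sketch of a SMOTE + cross-validated grid search + SHAP pipeline.
# Library choices (scikit-learn, imbalanced-learn, shap) and feature names
# are illustrative assumptions, not the PEARS study's actual setup.
import numpy as np
import pandas as pd
import shap
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Hypothetical baseline maternal characteristics and blood biomarkers.
X = pd.DataFrame(np.random.rand(500, 4),
                 columns=["bmi", "age", "glucose", "insulin"])
y = np.random.binomial(1, 0.15, 500)  # imbalanced GDM outcome

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# SMOTE sits inside the pipeline so oversampling happens only on each
# training fold, never leaking into the validation fold.
pipe = Pipeline([
    ("smote", SMOTE(random_state=42)),
    ("clf", RandomForestClassifier(random_state=42)),
])

# Five-fold cross-validated grid search optimising balanced accuracy,
# mirroring the procedure described in the abstract.
search = GridSearchCV(
    pipe,
    param_grid={"clf__n_estimators": [100, 300],
                "clf__max_depth": [3, 6, None]},
    scoring="balanced_accuracy",
    cv=5,
)
search.fit(X_train, y_train)

# Shapley additive explanations: per-patient feature contributions to
# the predicted GDM risk of the best fitted model.
model = search.best_estimator_.named_steps["clf"]
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# For binary classifiers, shap returns per-class attributions; keep the
# positive (GDM) class before plotting.
pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(pos, X_test)
```

Keeping SMOTE inside the cross-validation pipeline is the key design choice: oversampling before the split would let synthetic copies of minority cases leak into validation folds and inflate the reported balanced accuracy.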
In this paper, we focus on the problem of highway merging via a parallel-type on-ramp for autonomous vehicles (AVs) in a decentralized, non-cooperative way. This problem is challenging because of the highly dynamic and complex road environment. A deep reinforcement learning-based approach is proposed. The kernel of this approach is a Deep Q-Network (DQN) that takes the dynamic traffic state as input and outputs actions including longitudinal acceleration (or deceleration) and lane merge. The total reward for this on-ramp merge problem consists of three parts: a merge success reward, a merge safety reward, and a merge efficiency reward. For model training and testing, we construct highway on-ramp merging simulation experiments with realistic driving parameters. The experimental results show that the proposed approach can make reasonable merging decisions based on observations of the traffic environment. We also compare our approach with a state-of-the-art approach and demonstrate its superior performance in making challenging merging decisions in complex highway parallel-type on-ramp merging scenarios.
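To make the three-part reward decomposition concrete, below is a minimal Python sketch; the state fields, weights, and thresholds are hypothetical, since the abstract does not specify the exact formulation.

```python
# Illustrative sketch of a three-part on-ramp merge reward (success +
# safety + efficiency). Weights, thresholds, and state fields are
# assumptions, not the paper's actual reward function.
from dataclasses import dataclass

@dataclass
class MergeState:
    merged: bool          # ego vehicle completed the lane merge
    collided: bool        # ego vehicle collided or violated a safety gap
    gap_to_leader: float  # longitudinal gap to the leading vehicle (m)
    speed: float          # ego speed (m/s)
    target_speed: float   # desired highway speed (m/s)

def merge_reward(s: MergeState,
                 w_success: float = 1.0,
                 w_safety: float = 1.0,
                 w_efficiency: float = 0.1,
                 min_gap: float = 10.0) -> float:
    # Success term: one-off bonus when the merge completes.
    r_success = 1.0 if s.merged else 0.0
    # Safety term: large penalty on collision, mild penalty for short gaps.
    if s.collided:
        r_safety = -1.0
    else:
        r_safety = min(0.0, (s.gap_to_leader - min_gap) / min_gap)
    # Efficiency term: penalise deviation from the target speed.
    r_efficiency = -abs(s.speed - s.target_speed) / s.target_speed
    return (w_success * r_success
            + w_safety * r_safety
            + w_efficiency * r_efficiency)

# Example: a safe but slow vehicle mid-merge.
print(merge_reward(MergeState(False, False, 15.0, 20.0, 27.0)))
```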
In this study, a deep reinforcement learning approach is proposed to handle tactical driving in complex highway traffic environments for unmanned ground vehicles. Tactical driving is a challenging topic for unmanned ground vehicles because of its interplay with routing decisions as well as real-time traffic dynamics. The core of our deep reinforcement learning approach is a deep Q-network that takes dynamic traffic information as input and outputs typical tactical driving decisions as actions. The reward is designed with consideration of successful highway exit, average traveling speed, and driving safety and comfort. To endow an unmanned ground vehicle with the situational traffic information that is critical for tactical driving, the vehicle's sensor information, such as vehicle position and velocity, is further augmented through assessment of the ego vehicle's collision risk, potential field, and kinematics, and used as input to the deep Q-network model. A convolutional neural network is built and fine-tuned to extract traffic features that facilitate the decision-making process of Q-learning. For model training and testing, a highway simulation platform is constructed with realistic parameter settings obtained from a real-world highway traffic dataset. The performance of the deep Q-network model is validated with extensive simulation experiments under different parameter settings such as traffic density and risk level. The results demonstrate the strong potential of our deep Q-network model in learning challenging tactical driving decisions given multiple objectives and a complex traffic environment.
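As an illustration of this kind of architecture, the following PyTorch sketch shows a CNN-based Q-network over a multi-channel traffic grid; the grid encoding, layer sizes, and five-action set are assumptions, not the paper's exact design.

```python
# Minimal PyTorch sketch of a CNN-based Q-network for tactical driving.
# The grid encoding, layer sizes, and five-action set (keep lane, change
# left/right, accelerate, decelerate) are illustrative assumptions.
import torch
import torch.nn as nn

class TacticalDQN(nn.Module):
    def __init__(self, n_actions: int = 5):
        super().__init__()
        # Convolutional layers extract spatial traffic features from a
        # multi-channel grid around the ego vehicle (e.g. occupancy,
        # relative speed, and collision-risk / potential-field channels).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        # Fully connected head maps features to one Q-value per action.
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 10, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(grid))

# A batch of hypothetical 3-channel 8x20 traffic grids around the ego car.
q_net = TacticalDQN()
q_values = q_net(torch.randn(4, 3, 8, 20))
action = q_values.argmax(dim=1)  # greedy tactical decision per sample
```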