eXplainable Artificial Intelligence (XAI) aims to provide intelligible explanations to users. XAI algorithms such as SHAP, LIME and Scoped Rules compute feature importance for machine learning predictions. Although XAI has attracted much research attention, applying XAI techniques in healthcare to inform clinical decision making is challenging. In this paper, we compare the explanations given by XAI methods as an additional layer of analysis for complex Electronic Health Records (EHRs). Using a large-scale EHR dataset, we compare EHR features in terms of the prediction importance estimated by XAI models. Our experimental results show that the studied XAI methods can rank different features as most important depending on the case; these discrepancies in shared feature importance merit further exploration by domain experts to evaluate human trust in XAI.
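As a concrete illustration of the kind of comparison described above, the following sketch contrasts the top-ranked features returned by SHAP and LIME for a single prediction. It is not the authors' pipeline: the random-forest model, the synthetic data standing in for EHR features, and the feature names are assumptions made purely for illustration.

# Minimal sketch: compare top features ranked by SHAP and LIME for one prediction.
# Synthetic data stands in for EHR features; this is an illustration, not the paper's pipeline.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributions for one instance (positive class).
sv = shap.TreeExplainer(model).shap_values(X[:1])
if isinstance(sv, list):          # older shap versions: one array per class
    sv = sv[1]
sv = np.asarray(sv)
if sv.ndim == 3:                  # newer shap versions: (samples, features, classes)
    sv = sv[:, :, 1]
shap_top = [feature_names[i] for i in np.argsort(-np.abs(sv[0]))[:5]]

# LIME attributions for the same instance.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")
lime_top = [name for name, _ in
            lime_explainer.explain_instance(X[0], model.predict_proba,
                                            num_features=5).as_list()]

print("SHAP top features:", shap_top)
print("LIME top features:", lime_top)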
The recent explosion of demand for Explainable AI (XAI) techniques has encouraged the development of various algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). Although these algorithms have been widely discussed by the AI community, their application to wider domains remains rare, potentially due to the lack of easy-to-use tools built around these methods. In this paper, we present ExMed, a tool that enables XAI data analytics for domain experts without requiring explicit programming skills. In particular, it supports data analytics with multiple feature attribution algorithms for explaining machine learning classifications and regressions. We illustrate its range of applications on two real-world medical case studies, the first analysing COVID-19 control measure effectiveness and the second estimating lung cancer patient life expectancy from the artificial Simulacrum health dataset. We conclude that ExMed provides researchers and domain experts with a tool that combines flexibility and transferability across medical sub-domains and reveals deep insights from data.
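Purely to illustrate the feature-attribution step that a tool such as ExMed automates behind its interface, the sketch below applies a model-agnostic SHAP explainer to a regression target (e.g. an estimated survival time). The synthetic data, model and variable names are assumptions; no ExMed API is used or implied.

# Minimal sketch of the feature-attribution step a tool like ExMed automates,
# here for a regression target; synthetic data only, no ExMed API involved.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=300, n_features=8, noise=0.1, random_state=0)
feature_names = [f"clinical_var_{i}" for i in range(X.shape[1])]
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Model-agnostic attribution: KernelExplainer only needs a predict function,
# which is what lets one front end cover both classifiers and regressors.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:1], nsamples=200)

# Rank features by absolute attribution for the explained instance.
for name, value in sorted(zip(feature_names, shap_values[0]), key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.3f}")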
Local explanations aim to provide transparency for individual instances and their associated predictions. The need for local explanations is prominent in high-risk domains such as finance, law and healthcare. We propose a new model-agnostic framework for local explanations, Polynomial Adaptive Local Explanations (PALE), to combat the lack of transparency of predictions through adaptive local models. We aim to explore explanations of predictions by assessing the impact of the instantaneous rate of change in each feature and its association with the resulting prediction of the local model. PALE optimises a complex black-box model and a local explanation model for each instance, providing two forms of explanation: the first given by a localised derivative of an adapting polynomial, emphasising instance specificity, and the second by a core interpretable logistic regression model.
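The abstract does not specify PALE's implementation, so the sketch below only illustrates the general idea it describes: fit a polynomial surrogate in a neighbourhood of the instance and read the surrogate's derivative at that instance as a per-feature rate of change. The perturbation scheme, polynomial degree and regularisation used here are assumptions, not the published algorithm.

# Illustrative sketch only: a local polynomial surrogate fitted around one instance,
# with its gradient at that instance used as a feature-importance signal.
# Perturbation scale, degree and regularisation are assumptions, not PALE as published.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                              # instance to explain
Z = x0 + rng.normal(scale=0.3, size=(1000, x0.size))   # local perturbations
p = black_box.predict_proba(Z)[:, 1]                   # black-box outputs nearby

# Fit a quadratic surrogate p ~ f(z) in the neighbourhood of x0.
poly = PolynomialFeatures(degree=2, include_bias=False)
surrogate = Ridge(alpha=1.0).fit(poly.fit_transform(Z), p)

# Numerical gradient of the surrogate at x0: the instantaneous rate of change per feature.
eps = 1e-4
grad = np.array([
    (surrogate.predict(poly.transform([x0 + eps * e]))[0]
     - surrogate.predict(poly.transform([x0 - eps * e]))[0]) / (2 * eps)
    for e in np.eye(x0.size)
])
print("Local gradient (per-feature rate of change):", np.round(grad, 3))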