We developed an explainable artificial intelligence (AI) early warning score (xAI-EWS) system for the early detection of acute critical illness. While maintaining high predictive performance, our system explains to the clinician which relevant electronic health record (EHR) data a prediction is grounded in. Acute critical illness is often preceded by deterioration of routinely measured clinical parameters, e.g., blood pressure and heart rate. Early clinical prediction is typically based on manually calculated screening metrics that simply weigh these parameters, such as early warning scores (EWSs). The predictive performance of EWSs reflects a tradeoff between sensitivity and specificity that can lead to negative outcomes for the patient [1-3]. Previous work on EHR-trained AI systems offers promising results, with high levels of predictive performance for the early, real-time prediction of acute critical illness [4-21]. However, without insight into the complex decisions made by such systems, clinical translation is hindered. In this paper, we present our xAI-EWS system, which potentiates clinical translation by accompanying each prediction with information on the EHR data that explains it.

Artificial intelligence is capable of predicting acute critical illness earlier and with greater accuracy than traditional EWS systems, such as modified early warning scores (MEWSs) and sequential organ failure assessment (SOFA) scores [4, 8, 9, 11-13, 15, 17-19, 22-26]. Unfortunately, the standard deep learning (DL) models that make up available AI systems are black boxes, and their predictions cannot readily be explained to clinicians. The importance of explainable and transparent DL algorithms in clinical medicine is without question and was recently highlighted in the Nature Medicine review by Topol [27, 28]. Transparency and explainability are an absolute necessity for the widespread introduction of AI models into clinical practice, where an incorrect prediction can have grave consequences [28-31]. Clinicians must be able to understand the underlying reasoning of AI models so that they can trust the predictions and identify individual cases in which an AI model potentially gives incorrect predictions [28-30, 32].

In this paper, we present the xAI-EWS, which comprises a robust and accurate AI model for predicting acute critical illness from EHRs. Importantly, the xAI-EWS was designed to provide explanations for its predictions. To demonstrate the general clinical relevance of the xAI-EWS, we present results from three emergency medicine cases: sepsis, acute kidney injury (AKI), and acute lung injury (ALI). The xAI-EWS is composed of a temporal convolutional network (TCN) [33-35] prediction module and a deep Taylor decomposition (DTD) [36-40] explanation module, tailored to temporal explanations (see Figure 1). The architecture of the TCN has proven to be particularly effective at predicting events that ha...
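As a concrete illustration of the manually calculated screening metrics mentioned above, the sketch below computes a MEWS-style score by banding routine vital signs and summing the sub-scores. The thresholds follow one commonly published MEWS chart and are illustrative assumptions; hospitals use locally adapted variants, and this is not necessarily the exact chart referenced in this work.

```python
def mews(sbp, hr, rr, temp, avpu):
    """Modified Early Warning Score; illustrative thresholds only."""
    score = 0
    # Systolic blood pressure (mmHg)
    if sbp <= 70:
        score += 3
    elif sbp <= 80:
        score += 2
    elif sbp <= 100:
        score += 1
    elif sbp >= 200:
        score += 2
    # Heart rate (beats/min)
    if hr < 40:
        score += 2
    elif hr <= 50:
        score += 1
    elif hr <= 100:
        pass            # normal range scores 0
    elif hr <= 110:
        score += 1
    elif hr <= 129:
        score += 2
    else:
        score += 3
    # Respiratory rate (breaths/min)
    if rr < 9:
        score += 2
    elif rr <= 14:
        pass            # normal range scores 0
    elif rr <= 20:
        score += 1
    elif rr <= 29:
        score += 2
    else:
        score += 3
    # Temperature (degrees Celsius)
    if temp < 35.0 or temp >= 38.5:
        score += 2
    # Level of consciousness on the AVPU scale
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

# SBP 95 (1) + HR 115 (2) + RR 22 (2) + temp 38.7 (2) + alert (0) = 7,
# a score that triggers escalation in most MEWS protocols.
print(mews(sbp=95, hr=115, rr=22, temp=38.7, avpu="alert"))  # 7
```

The example makes the limitation visible: every parameter contributes a fixed, hand-chosen weight, which is exactly the rigidity behind the sensitivity-specificity tradeoff described above.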
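The TCN prediction module is only named in the text; as a minimal sketch of the mechanism that makes TCNs well suited to temporal EHR data (causal, dilated 1-D convolutions with residual connections), the following PyTorch code may help. The kernel size, channel count, depth, input dimensionality, and sequence length are assumptions chosen for illustration, not the architecture trained for the xAI-EWS.

```python
import torch
import torch.nn as nn

class CausalBlock(nn.Module):
    """One TCN residual block: two dilated causal convolutions.

    Left-padding by (kernel_size - 1) * dilation keeps the convolutions
    causal: the output at time t only sees inputs at times <= t.
    """
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):  # x: (batch, channels, time)
        h = self.relu(self.conv1(nn.functional.pad(x, (self.pad, 0))))
        h = self.relu(self.conv2(nn.functional.pad(h, (self.pad, 0))))
        return self.relu(x + h)  # residual connection

class TinyTCN(nn.Module):
    """Stacked blocks with exponentially growing dilations (1, 2, 4, ...)."""
    def __init__(self, n_params, channels=64, levels=4):
        super().__init__()
        self.inp = nn.Conv1d(n_params, channels, kernel_size=1)
        self.blocks = nn.Sequential(
            *[CausalBlock(channels, kernel_size=3, dilation=2 ** i)
              for i in range(levels)])
        self.out = nn.Linear(channels, 1)  # one risk logit per sequence

    def forward(self, x):  # x: (batch, n_params, time)
        h = self.blocks(self.inp(x))
        return self.out(h[:, :, -1])  # predict from the latest time step

# e.g., a batch of 8 patients, 27 clinical parameters, 48 hourly steps
# (all shapes here are illustrative assumptions)
logits = TinyTCN(n_params=27)(torch.randn(8, 27, 48))
```

Stacking blocks with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, so the logit at the final time step can draw on many hours of history while remaining strictly causal; this is what makes the architecture attractive for real-time EHR prediction.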
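Deep Taylor decomposition explains a prediction by redistributing the model's output score backward through the network, layer by layer, until every input (here, each clinical parameter at each time step) carries a relevance value. The NumPy sketch below shows the z+ propagation rule commonly derived from DTD for ReLU layers; it is a generic illustration of the relevance-conservation idea, not the paper's exact explanation module.

```python
import numpy as np

def dtd_zplus(activations, weights, relevance_out, eps=1e-9):
    """Propagate relevance one layer backward with the DTD z+ rule.

    activations:   (n_in,)        non-negative layer inputs a_j
    weights:       (n_in, n_out)  layer weights w_jk
    relevance_out: (n_out,)       relevance R_k assigned to the outputs
    Returns R_j; relevance is conserved, sum(R_j) ~= sum(R_k).
    """
    w_pos = np.maximum(weights, 0.0)   # z+ rule: positive weights only
    z = activations @ w_pos + eps      # (n_out,) positive pre-activations
    s = relevance_out / z              # (n_out,) relevance per unit of z
    return activations * (w_pos @ s)   # (n_in,)  a_j * sum_k w_jk+ * s_k

# Toy layer: 3 inputs, 2 outputs; all relevance starts at output unit 1.
a = np.array([1.0, 0.5, 2.0])
W = np.array([[0.3, -0.2],
              [0.1,  0.4],
              [-0.5, 0.6]])
R = dtd_zplus(a, W, np.array([0.0, 1.0]))
print(R, R.sum())  # per-input relevance; sums to ~1.0
```

Applied recursively from the output of a temporal model back to its input tensor, a rule of this kind yields a relevance value per clinical parameter per time step, which is the form of temporal explanation the xAI-EWS presents to clinicians.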