Various AI models are increasingly being considered as part of clinical decision-support tools. However, the trustworthiness of such models is rarely considered. Clinicians are more likely to use a model if they can understand and trust its predictions; key to this is whether its underlying reasoning can be explained. A Bayesian network (BN) model has the advantage that it is not a black box and its reasoning can be explained. In this paper, we propose an incremental explanation of inference that can be applied to 'hybrid' BNs, i.e. those that contain both discrete and continuous nodes. The key questions that we answer are: (1) which important evidence supports or contradicts the prediction, and (2) through which intermediate variables does the information flow. The explanation is illustrated using a real clinical case study, and a small evaluation study is also conducted.

This paper considers Bayesian networks (BNs): directed acyclic graphs that show causal or influential relationships between random variables. These variables can be discrete or continuous, and a BN containing both is called 'hybrid'. The uncertain relationships between connected variables are expressed using conditional probabilities. The strength of these relationships is captured in the Node Probability Table (NPT), which represents the conditional probability distribution of each node in the BN given its parents. Once values for all NPTs are given, the BN is fully parameterized, and probabilistic reasoning (using Bayesian inference) can be performed. However, the reasoning process is not always easy for a user to follow [6], [7], [8].

In contrast to many CDS models, a BN is not a black box and its reasoning can be explained [6], [9]. Several approaches have been proposed to explain the reasoning of a BN (presented in Section 3). However, there are many situations where these methods cannot be applied. First, most of the methods apply only to BNs containing discrete variables, and some are restricted to binary variables; in contrast, most medical BNs also include continuous nodes. In addition, most of these methods attempt to find the best explanation, which can be time-consuming, especially for the large BNs that are common in medical applications. Finally, some methods require user input at different stages of the explanation, which can be problematic in situations where there is time pressure.

In this paper, we propose a practical method of explaining the reasoning in a BN, so that a user can understand how a prediction is generated. The method extends a previous conference paper by the authors [10]. It can be used in hybrid networks that have both continuous and discrete nodes and requires no user input. In addition, we simplify the process of identifying the most important evidence and chains of reasoning, so that we can rapidly produce a good and concise explanation, though not necessarily the most complete one. In fact, our method produces an incremental explanation that has three successive l...
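As background to the NPT-based Bayesian inference described above (and not part of the authors' proposed explanation method), the following is a minimal sketch of a two-node discrete BN and the posterior computation that a BN tool would perform when evidence is entered. The node names (Disease, Test), the prior, and the NPT values are all hypothetical, and the enumeration-based inference is only a toy illustration of how evidence updates a prediction.

# Minimal sketch of a two-node discrete BN (Disease -> Test) with hypothetical NPT values.
p_disease = {"present": 0.01, "absent": 0.99}        # prior distribution P(Disease)
npt_test = {                                          # NPT of Test: P(Test | Disease)
    "present": {"positive": 0.90, "negative": 0.10},
    "absent":  {"positive": 0.05, "negative": 0.95},
}

def posterior_disease(test_result):
    # Bayesian inference by enumeration: P(Disease | Test = test_result)
    joint = {d: p_disease[d] * npt_test[d][test_result] for d in p_disease}
    evidence_prob = sum(joint.values())               # P(Test = test_result)
    return {d: joint[d] / evidence_prob for d in joint}

print(posterior_disease("positive"))
# {'present': 0.1538..., 'absent': 0.8461...} -- a positive test raises P(Disease = present)

An explanation of inference, in the sense discussed in this paper, must convey in clinical terms why the posterior moved in this way: which evidence supported or contradicted the prediction, and through which intermediate nodes the information flowed.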