2020
DOI: 10.1016/j.artmed.2020.101812

An incremental explanation of inference in Bayesian networks for increasing model trustworthiness and supporting clinical decision making

Abstract: Various AI models are increasingly being considered as part of clinical decision-support tools. However, the trustworthiness of such models is rarely considered. Clinicians are more likely to use a model if they can understand and trust its predictions. Key to this is whether its underlying reasoning can be explained. A Bayesian network (BN) model has the advantage that it is not a black-box and its reasoning can be explained. In this paper, we propose an incremental explanation of inference that can be applied to …
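The abstract only sketches the idea, but the core notion of explaining inference incrementally can be illustrated with a toy example. The sketch below is a minimal illustration in plain Python, assuming a made-up three-node clinical network (Disease -> Symptom, Disease -> Test) with invented probabilities; it is not the authors' model or algorithm. Evidence is added one finding at a time, and after each step the posterior of the target node and its change are reported, which is one simple way an explanation could be built up incrementally.

```python
# A minimal sketch of "incremental explanation of inference" in a Bayesian
# network. The three-node network (Disease -> Symptom, Disease -> Test) and
# all probabilities are illustrative assumptions, not the authors' model.
from itertools import product

P_DISEASE = {True: 0.01, False: 0.99}                 # P(Disease)
P_SYMPTOM = {True: {True: 0.80, False: 0.20},         # P(Symptom | Disease)
             False: {True: 0.10, False: 0.90}}
P_TEST = {True: {True: 0.90, False: 0.10},            # P(Test | Disease)
          False: {True: 0.05, False: 0.95}}

def joint(d, s, t):
    """P(Disease=d, Symptom=s, Test=t) via the chain rule of the network."""
    return P_DISEASE[d] * P_SYMPTOM[d][s] * P_TEST[d][t]

def posterior_disease(evidence):
    """P(Disease=True | evidence) by brute-force enumeration of the joint."""
    num = den = 0.0
    for d, s, t in product([True, False], repeat=3):
        if 'Symptom' in evidence and s != evidence['Symptom']:
            continue
        if 'Test' in evidence and t != evidence['Test']:
            continue
        p = joint(d, s, t)
        den += p
        if d:
            num += p
    return num / den

# Introduce the findings one at a time and report how each moves the target.
findings = [('Symptom', True), ('Test', True)]
evidence = {}
prev = posterior_disease(evidence)
print(f"Prior P(Disease) = {prev:.3f}")
for name, value in findings:
    evidence[name] = value
    post = posterior_disease(evidence)
    print(f"After observing {name}={value}: P(Disease) = {post:.3f} "
          f"(change {post - prev:+.3f})")
    prev = post
```

Running the sketch prints the prior of Disease followed by the posterior after each finding is observed, so a reader can see which finding moved the belief and by how much.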

Cited by 26 publications (15 citation statements); references 25 publications.
“…Owing to their probabilistic semantics, which allows conditioning and interventions, researchers have looked into ways to augment directed and undirected graphical models (Baum and Petrie, 1966) further to provide explanations, although, of course, they are already inherently transparent in the sense described above. Relevant works include (Timmer et al., 2016), where the authors propose a way to construct explanatory arguments from Bayesian models, as well as (Kyrimi et al., 2020), where explanations are produced in order to assess the trustworthiness of a model. Furthermore, ways to draw representative examples from data have been considered, such as in (Kim et al., 2014).…”
Section: Transparent Models
confidence: 99%
“…Bayesian networks have the advantage that they can be explained logically, increasing the trustworthiness of the model [27-29]. FaMe-Net aims to develop additional methods using Artificial Intelligence, which are of great help when investigating multiple relations: between morbidity (final diagnoses), comorbidity, mortality, illness presentation (RFE and duration), and contextual/personal characteristics including intoxications and ethnicity. Our expanded means to describe how morbidity is presented in family medicine form the basis of scientifically grounded family medicine.…”
Section: Discussion
confidence: 99%
“…[7] described a method to extract such arguments in their INSITE system. The method was later refined in [8] and [9]. Their approaches suggest how to measure and explain the effect of the available evidence on a target node, but they are quite limited when it comes to explaining interactions between chains of reasoning.…”
Section: Previous Work
confidence: 99%
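The "effect of the available evidence on a target node" mentioned in this statement can be measured in several ways. The sketch below is one hedged illustration, not the INSITE method or the refinements in [8] and [9]: it uses an invented two-finding network and brute-force enumeration, and takes the impact of a finding to be the change in the target posterior when that finding is removed from the full evidence set (a leave-one-out measure).

```python
# A minimal sketch of measuring each evidence item's effect on a target node
# via a leave-one-out comparison. The network and numbers are invented for
# illustration only.
from itertools import product

def prob_world(d, f, b):
    """P(Disease=d, Fever=f, BloodTest=b) for an assumed two-finding network."""
    p_d = 0.02 if d else 0.98                                      # P(Disease)
    p_f = (0.70 if f else 0.30) if d else (0.15 if f else 0.85)    # P(Fever | Disease)
    p_b = (0.85 if b else 0.15) if d else (0.10 if b else 0.90)    # P(BloodTest | Disease)
    return p_d * p_f * p_b

def posterior(evidence):
    """P(Disease=True | evidence) by brute-force enumeration."""
    num = den = 0.0
    for d, f, b in product([True, False], repeat=3):
        world = {'Disease': d, 'Fever': f, 'BloodTest': b}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = prob_world(d, f, b)
        den += p
        num += p if d else 0.0
    return num / den

evidence = {'Fever': True, 'BloodTest': True}
full = posterior(evidence)
print(f"P(Disease | all evidence) = {full:.3f}")
for finding in evidence:
    reduced = {k: v for k, v in evidence.items() if k != finding}
    print(f"Without {finding}: P(Disease) = {posterior(reduced):.3f} "
          f"(impact of {finding}: {full - posterior(reduced):+.3f})")
```

A leave-one-out measure like this captures each finding's marginal contribution given the rest of the evidence, but, as the quoted statement notes, it does not by itself explain interactions between chains of reasoning.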