Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI
2021 | DOI: 10.1016/j.inffus.2021.01.008

Cited by 268 publications (126 citation statements)
References 39 publications
Citation statements: 2 supporting, 113 mentioning, 0 contrasting

“…As stated in [55], explainable methods are becoming more relevant, particularly in the healthcare domain. Thus, many aspects must be considered when designing explainable ML methods, e.g., who is the domain expert? who are the affected users? among others [56,57]. Accordingly, Table 12 shows an attention-mechanism visualization for the OMT classification task. [Table 12 fragment; columns: Motive, Sample text (closest English translation); sample row for motive A: ‘she needs understanding and turns to someone who listens to and understands her; she feels safe and accepted and tells what is burdensome; she is accepted as she is.’]…”
Section: Results Analysis (mentioning)
confidence: 93%
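
The kind of per-token attention visualization this excerpt tabulates can be sketched in a few lines. The model below is a hypothetical toy (the class name AttnClassifier, the vocabulary size, and the token ids are all illustrative assumptions, not the cited OMT model): additive attention pools token embeddings, and the softmax weights double as a per-token relevance score.

```python
import torch
import torch.nn as nn

class AttnClassifier(nn.Module):
    """Toy text classifier with additive attention pooling.
    The softmax attention weights serve as a per-token
    explanation, the kind of signal an attention table shows."""
    def __init__(self, vocab_size=1000, dim=64, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)       # scalar score per token
        self.out = nn.Linear(dim, n_classes)

    def forward(self, token_ids):
        h = self.emb(token_ids)                     # (seq, dim)
        attn = torch.softmax(self.score(h), dim=0)  # (seq, 1), sums to 1
        pooled = (attn * h).sum(dim=0)              # attention-weighted sum
        return self.out(pooled), attn.squeeze(-1)

model = AttnClassifier()
tokens = torch.tensor([12, 7, 301, 45])             # hypothetical token ids
logits, weights = model(tokens)
for tok, w in zip(tokens.tolist(), weights.tolist()):
    print(f"token {tok}: attention {w:.3f}")        # per-token relevance
```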
“…Another way to improve interpretability is the use of graph-based models, as introduced in Section 4.5. As mentioned earlier, GNNs have advantages in multi-omics integrated analysis and intrinsically allow for more explainability [159]. Various recent studies report the merits of graph-oriented models.…”
Section: Discussion (mentioning)
confidence: 99%
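
Why graph models lend themselves to explanation can be made concrete with a minimal sketch: in a GCN layer every message travels along an explicit edge of the adjacency matrix, so a prediction can be attributed to individual edges by occluding them one at a time. The layer and the occlusion probe below are illustrative assumptions, not the method of [159].

```python
import torch

def gcn_layer(A, H, W):
    """One GCN propagation step: D^{-1/2}(A+I)D^{-1/2} H W.
    Every message flows along an explicit edge of A, which is
    what makes graph models comparatively easy to interrogate."""
    A_hat = A + torch.eye(A.size(0))                 # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return torch.relu(A_norm @ H @ W)

# Occlusion-style edge attribution: drop one edge and measure the
# shift in each node's embedding (a simple stand-in for richer
# GNN explainers).
A = torch.tensor([[0., 1., 1.],
                  [1., 0., 0.],
                  [1., 0., 0.]])
H = torch.randn(3, 8)
W = torch.randn(8, 8)
base = gcn_layer(A, H, W)
A_drop = A.clone()
A_drop[0, 1] = A_drop[1, 0] = 0.                     # occlude edge (0,1)
delta = (gcn_layer(A_drop, H, W) - base).norm(dim=1)
print("per-node effect of removing edge (0,1):", delta)
```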
“…These very versatile approaches also work on graph-based data [9]. Other methods include deconvolution, which reverses the effects of convolution (the operation that generates a third function from two input functions), as well as guided backpropagation [10]. All of these methods constitute an excellent preprocessing step.…”
Section: Explainability (mentioning)
confidence: 99%
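
Guided backpropagation is commonly implemented by modifying the ReLU backward pass so that a gradient flows only where both the forward activation and the incoming gradient are positive. A minimal PyTorch sketch follows, assuming a hypothetical two-layer convolutional net (not the architectures of [9] or [10]):

```python
import torch
import torch.nn as nn

class GuidedReLU(torch.autograd.Function):
    """ReLU whose backward pass lets a gradient through only where
    (a) the forward input was positive (the ordinary ReLU rule) and
    (b) the incoming gradient is positive (the 'guided' rule)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x > 0) * (grad_out > 0)

# Hypothetical two-layer net; any ReLU network works the same way.
conv1 = nn.Conv2d(1, 4, 3, padding=1)
conv2 = nn.Conv2d(4, 1, 3, padding=1)
img = torch.randn(1, 1, 8, 8, requires_grad=True)
out = conv2(GuidedReLU.apply(conv1(img)))
out.sum().backward()
saliency = img.grad.abs()          # guided-backprop relevance map
print(saliency.shape)              # torch.Size([1, 1, 8, 8])
```

Gating on both signs suppresses pixels that would decrease the target activation, which is what gives guided backpropagation its characteristically sharper saliency maps compared with plain gradients.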