Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3467154

Counterfactual Graphs for Explainable Classification of Brain Networks

Abstract: Training graph classifiers able to distinguish between healthy brains and dysfunctional ones can help identify substructures associated with specific cognitive phenotypes. However, the mere predictive power of the graph classifier is of limited interest to neuroscientists, who have plenty of tools for the diagnosis of specific mental disorders. What matters is the interpretation of the model, as it can provide novel insights and new hypotheses. In this paper we propose counterfactual graphs as a way to …
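
The abstract is truncated above, but the core idea of a counterfactual graph (a minimally perturbed version of the input graph whose predicted label flips) can be illustrated with a short sketch. The random edge-flip search below is a hypothetical illustration under simple assumptions, not the authors' algorithm; `classifier` stands for any black-box function from an adjacency matrix to a class label.

```python
import numpy as np

def counterfactual_graph(adj, classifier, max_flips=100, rng=None):
    """Flip one random edge at a time until the classifier's predicted
    label changes; the perturbed graph is then a counterfactual.

    adj        : symmetric 0/1 numpy array (e.g. a brain network)
    classifier : any callable mapping an adjacency matrix to a label
    """
    rng = rng or np.random.default_rng(0)
    original_label = classifier(adj)
    current = adj.copy()
    n = current.shape[0]
    for _ in range(max_flips):
        # Pick two distinct nodes and toggle the edge between them,
        # keeping the matrix symmetric.
        i, j = rng.choice(n, size=2, replace=False)
        current[i, j] = current[j, i] = 1 - current[i, j]
        if classifier(current) != original_label:
            return current  # prediction flipped: counterfactual found
    return None  # no counterfactual found within the flip budget
```

The edges on which `adj` and the returned graph differ are the candidate substructures a neuroscientist would then inspect.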

Cited by 26 publications (17 citation statements: 0 supporting, 17 mentioning, 0 contrasting). References 40 publications.

“…For example, due to the characteristics of graph data, traditional explanation methods for deep learning (e.g., input optimisation methods [137] and soft mask learning methods [138]) cannot be directly applied to GNNs because of the irregularity and discrete topology of graphs. When conducting evaluations, domain knowledge (e.g., brain connectomics [127]) is sometimes necessary to validate GNN explanations.…”
Section: B. Methods (mentioning)
confidence: 99%
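
The statement above notes that soft-mask explainers cannot be applied to graphs as-is, because a learned mask assigns fractional importances while a graph explanation must be a discrete subgraph. A minimal numpy sketch of the missing discretization step follows; the function name and the `keep_ratio` parameter are illustrative assumptions, not part of any cited method.

```python
import numpy as np

def discretize_edge_mask(soft_mask, adj, keep_ratio=0.2):
    """Threshold a learned soft edge mask back onto a graph's discrete
    topology, keeping the top `keep_ratio` fraction of existing edges.

    soft_mask : array of importances in [0, 1], same shape as adj
    adj       : 0/1 adjacency matrix with at least one edge
    """
    scores = soft_mask * adj                    # non-edges can never be kept
    k = max(1, int(keep_ratio * (adj > 0).sum()))
    threshold = np.sort(scores[adj > 0])[-k]    # k-th largest edge score
    return ((scores >= threshold) & (adj > 0)).astype(int)
```

A GNN-specific explainer in the style of GNNExplainer learns `soft_mask` by gradient descent; a thresholding step like the one above is what turns it into a subgraph explanation.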
“…Consequently, building trustworthy GNNs requires insights into why GNNs make particular predictions, which has driven an increase in research into the interpretability and explainability of GNNs. These abilities enable researchers to capture causality in GNNs [126] or insights for further investigation in applications [127], foster the implementation of robust GNN systems by developers [128], and guide regulators to ensure the fairness of GNNs [129].…”
Section: Explainability of GNNs (mentioning)
confidence: 99%
“…In this paper, instead, we consider a specific type of graph classification that arises in many application domains, in which a specific node id corresponds to the same entity in all the input networks: for instance, in classification of brain networks, the same node id represents the same brain region in all the input graphs. We call this setting graph classification with node identity awareness [15,20,1]. While not every application domain requires node identity awareness, it is crucial to exploit this property whenever it occurs, as ignoring it represents an important loss of information.…”
Section: Overview of Contributions and Roadmap (mentioning)
confidence: 99%
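
The quoted statement defines graph classification with node identity awareness: node i denotes the same brain region in every input graph. The sketch below, with all shapes and data purely illustrative, shows why this matters: when identities align across graphs, per-edge statistics are directly comparable across subjects, a signal that a permutation-invariant graph readout would discard.

```python
import numpy as np

# Hypothetical shapes for illustration: 100 subjects, 90 brain regions.
n_subjects, n_regions = 100, 90
rng = np.random.default_rng(0)

# With node identity awareness, node i denotes the same brain region in
# every graph, so the dataset stacks into one (subjects, regions, regions)
# tensor of symmetric 0/1 adjacency matrices.
adjs = rng.integers(0, 2, size=(n_subjects, n_regions, n_regions))
adjs = np.triu(adjs, 1)
adjs = adjs + adjs.transpose(0, 2, 1)          # symmetrize each matrix
labels = rng.integers(0, 2, size=n_subjects)   # 0 = healthy, 1 = condition

# Because identities align, the difference in edge frequency between the
# two classes is meaningful edge by edge.
edge_freq_diff = adjs[labels == 1].mean(0) - adjs[labels == 0].mean(0)
i, j = np.unravel_index(np.abs(edge_freq_diff).argmax(), edge_freq_diff.shape)
print(f"most class-discriminative edge: regions {i} and {j}")
```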