2021
DOI: 10.48550/arxiv.2111.10657
Preprint
Generalizing Graph Neural Networks on Out-Of-Distribution Graphs

Abstract: Graph Neural Networks (GNNs) are typically proposed without considering agnostic distribution shifts between training graphs and testing graphs, which degrades their generalization ability in Out-Of-Distribution (OOD) settings. The fundamental reason for this degeneration is that most GNNs are developed under the I.I.D. hypothesis. In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for prediction, even when they are spurious correlations. This …
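To make the failure mode the abstract describes concrete, here is a minimal toy sketch (my illustration, not from the paper; the synthetic setup, feature names, and correlation levels are all assumptions): a linear classifier is trained on data where a spurious feature strongly agrees with the label, and its accuracy drops once that correlation breaks at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n, spurious_corr):
    """Synthetic data: one causal feature that is stably (but weakly)
    predictive, and one spurious feature that agrees with the label
    with probability `spurious_corr`."""
    y = rng.integers(0, 2, size=n)
    causal = y + 0.8 * rng.normal(size=n)          # weakly predictive, stable across splits
    flip = rng.random(n) < spurious_corr           # agree with label w.p. spurious_corr
    spurious = np.where(flip, y, 1 - y) + 0.1 * rng.normal(size=n)
    return np.column_stack([causal, spurious]), y

X_train, y_train = make_split(5000, spurious_corr=0.95)  # strong spurious signal in-distribution
X_test, y_test = make_split(5000, spurious_corr=0.50)    # spurious signal vanishes OOD

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # high: the model leans on the spurious feature
print("OOD accuracy:  ", clf.score(X_test, y_test))    # drops once the correlation breaks
```

The same mechanism applies to GNNs, except the spurious signal lives in structural or feature statistics of the training graphs rather than in a single column.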

Cited by 5 publications (13 citation statements) · References 37 publications
“…However, in practice, it might be challenging to satisfy this ideal hypothesis. Recent research (Fan et al. 2021) studies how well GNNs generalize outside the training distribution. Several studies concentrate on size generalization, aiming to make GNNs perform well on testing graphs whose size distribution differs from that of the training graphs.…”
Section: Related Work
confidence: 99%
“…We suggest that not all correlations should be eliminated, in contrast to prior techniques (Fan et al. 2021), which aggressively decorrelate all connections across graph representations. Such an aggressive objective may result in an overly-reduced effective sample size (Martino, Elvira, Llorente, et al. 2022), which hampers the generalization ability of GNNs.…”
Section: Introduction
confidence: 97%
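The "overly-reduced effective sample size" concern can be quantified with the standard Kish effective sample size (ESS) diagnostic from the importance-sampling literature cited above: the more aggressively samples are reweighted to decorrelate representations, the more skewed the weights become and the fewer samples effectively contribute to training. A minimal sketch (the weight distributions below are made up purely for illustration):

```python
import numpy as np

def effective_sample_size(w):
    """Kish effective sample size: (sum w)^2 / sum(w^2).
    Equals n for uniform weights; shrinks toward 1 as weights concentrate."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / np.square(w).sum()

n = 1000
rng = np.random.default_rng(0)
uniform = np.ones(n)                         # no reweighting: ESS = n
mild = rng.gamma(5.0, size=n)                # mildly skewed weights (assumed)
aggressive = rng.gamma(0.05, size=n)         # heavy-tailed weights from aggressive decorrelation (assumed)

for name, w in [("uniform", uniform), ("mild", mild), ("aggressive", aggressive)]:
    print(f"{name:10s} ESS = {effective_sample_size(w):7.1f} / {n}")
```

The aggressive case leaves only a small fraction of the nominal sample effectively in play, which is the generalization hazard this citation statement points to.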
“…This study ran PAC-ML in an environment which had the same load rate, β distribution, cluster network size, and job computation graphs at train and test time. An interesting research question would be whether PAC-ML could learn on one set (or a distribution) of these parameters and then generalise to a new set at test time, or whether it would need to leverage existing or new state-of-the-art methods in GNN (Knyazev et al., 2019; Garg et al., 2020; Fan et al., 2021) and RL (Cobbe et al., 2019; Wang et al., 2020; Kirk et al., 2021) generalisation.…”
Section: Conclusion and Further Work
confidence: 99%