2020
DOI: 10.48550/arxiv.2002.06157
Preprint

Generalization and Representational Limits of Graph Neural Networks

Vikas K. Garg,
Stefanie Jegelka,
Tommi Jaakkola

Abstract: We address two fundamental questions about graph neural networks (GNNs). First, we prove that several important graph properties cannot be computed by GNNs that rely entirely on local information. Such GNNs include the standard message passing models, and more powerful spatial variants that exploit local graph structure (e.g., via relative orientation of messages, or local port ordering) to distinguish neighbors of each node. Our treatment includes a novel graph-theoretic formalism. Second, we provide the firs…
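The "standard message passing models" the abstract refers to can be illustrated with a minimal sketch. This is a hypothetical illustration of one message passing layer with sum aggregation, not code from the paper; the function and variable names are invented for this example.

```python
import numpy as np

def message_passing_layer(adj, features, weight):
    """One round of message passing: each node sums its neighbors'
    features (local aggregation), then applies a shared linear map
    followed by a ReLU nonlinearity."""
    messages = adj @ features          # sum over each node's neighbors
    return np.maximum(0.0, messages @ weight)

# Toy graph: a path 0-1-2, with 2-dimensional node features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.eye(3, 2)                       # simple indicator-style features
w = np.ones((2, 2))
h = message_passing_layer(adj, x, w)
print(h.shape)  # (3, 2): one updated embedding per node
```

Because every node sees only its immediate neighborhood per layer, a k-layer model of this form depends only on each node's k-hop neighborhood, which is the locality restriction the paper's impossibility results exploit.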

Cited by 6 publications (9 citation statements)
References 13 publications
“…Recently, various methods were proposed to focus on some graph characteristics, which could not be captured by GCNNs (e.g. longest circle in [12]).…”
Section: Graph Neural Network (mentioning)
confidence: 99%
“…In theory (Zaheer et al 2017, Theorem 2), if COMB 1,2 are given 'sufficient' hidden units, this set representation is universal. In practice, however, commutative-associative aggregation suffers from limited expressiveness (Pabbaraju and Jain 2019; Wagstaff et al 2019; Garg, Jegelka, and Jaakkola 2020; Cohen-Karlik, David, and Globerson 2020), which degrades the quality of x u and s(•, •), as described below. Specifically, their expressiveness is constrained from two perspectives.…”
Section: GNNs and Their Limitations (mentioning)
confidence: 99%
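The limited expressiveness of commutative-associative aggregation noted in the excerpt above can be seen in a tiny hypothetical example: two distinct neighbor multisets that a sum- or mean-based aggregator collapses to the same value, so any downstream combine function receives identical inputs.

```python
import numpy as np

# Two different multisets of scalar neighbor features.
neighbors_a = np.array([1.0, 3.0])
neighbors_b = np.array([2.0, 2.0])

# The multisets differ, yet sum and mean aggregation both
# map them to the same aggregate, erasing the distinction.
print(neighbors_a.sum(), neighbors_b.sum())    # 4.0 4.0
print(neighbors_a.mean(), neighbors_b.mean())  # 2.0 2.0
```

Any permutation-invariant readout built on such an aggregator therefore assigns these two neighborhoods the same representation, which is one concrete face of the expressiveness limits the cited works study.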
“…We use similar arguments to show that GNNs have enough expressive power to solve a task on a set of small graphs and to fail on it on a set of large graphs. Several works studied generalization bounds for certain classes of GNNs [Garg et al, 2020, Puny et al, 2020, Verma and Zhang, 2019], but did not discuss size-generalization. Sinha et al [2020] proposed a benchmark for assessing the logical generalization abilities of GNNs.…”
Section: Related Work (mentioning)
confidence: 99%