Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/181

When Do GNNs Work: Understanding and Improving Neighborhood Aggregation

Abstract: Graph Neural Networks (GNNs) have been shown to be powerful in a wide range of graph-related tasks. While various GNN models exist, a critical common ingredient is neighborhood aggregation, where the embedding of each node is updated by referring to the embeddings of its neighbors. This paper aims to provide a better understanding of this mechanism by asking the following question: Is neighborhood aggregation always necessary and beneficial? In short, the answer is no. We carve out two conditions…
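For reference, the generic neighborhood-aggregation update described in the abstract can be written as follows; UPDATE and AGG are generic placeholders for whatever combination and aggregation functions a particular GNN uses, not the specific formulation analyzed in this paper:

h_v^{(k)} = \mathrm{UPDATE}\left( h_v^{(k-1)},\ \mathrm{AGG}\left( \left\{ h_u^{(k-1)} : u \in \mathcal{N}(v) \right\} \right) \right)

where h_v^{(k)} denotes the embedding of node v after k rounds of aggregation and \mathcal{N}(v) is the set of neighbors of v.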

Cited by 52 publications (30 citation statements) | References 1 publication
“…Convolutional neural networks (CNNs) applied on 2D/3D images defined on standard Euclidean grids (Deng et al., 2009; LeCun et al., 1989) are designed using 2D/3D rectangular convolutional kernels that slide across the images and map F_in input feature maps to F_out output feature maps. An extension of this application to data types in irregular domains such as graphs is typically expressed using neighborhood aggregation (Corso et al., 2020; Xie et al., 2020) or message passing (Gilmer et al., 2017) schemes.…”
Section: Spiral Convolution
Mentioning confidence: 99%
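As an illustration of the neighborhood-aggregation (message-passing) scheme referred to in the excerpt above, the following sketch implements a single mean-aggregation layer in NumPy. The function name, the dense adjacency-matrix formulation, and the ReLU non-linearity are illustrative assumptions, not the construction used by any of the cited works.

import numpy as np

def mean_aggregation_layer(A, H, W):
    """One neighborhood-aggregation (message-passing) step on a graph.

    A: (n, n) adjacency matrix, A[i, j] = 1 if nodes i and j are connected
    H: (n, f_in) node embeddings from the previous layer
    W: (f_in, f_out) learnable weight matrix
    Returns the (n, f_out) matrix of updated node embeddings.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops so each node keeps its own signal
    deg = A_hat.sum(axis=1, keepdims=True)  # neighborhood sizes (including the self-loop)
    H_agg = (A_hat @ H) / deg               # mean of each node's neighborhood embeddings
    return np.maximum(0.0, H_agg @ W)       # linear transform followed by a ReLU

# Toy usage: a 3-node path graph, mapping 2-dimensional features to 4 dimensions.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(mean_aggregation_layer(A, H, W).shape)  # (3, 4)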
“…A recent work [23] offers an important insight into the neighborhood aggregation of GNNs, i.e., the neighborhood aggregation is not always necessary and beneficial. Following this insight, we aim to deepen our understanding of the vulnerability of GNNs from an empirical study perspective.…”
Section: Related Work
Mentioning confidence: 99%
“…There are several works studying the expressive power of GNNs [10, 17, 23, 25], including their depth, width, representational properties and limitations. Specifically, most GNNs follow a neighborhood aggregation scheme, where the hidden…”
Section: Message Passing Graph Neural Network
Mentioning confidence: 99%