2021
DOI: 10.48550/arxiv.2106.06210
Preprint

Learning to Pool in Graph Neural Networks for Extrapolation

Jihoon Ko,
Taehyung Kwon,
Kijung Shin
et al.

Abstract: Graph neural networks (GNNs) are one of the most popular approaches to using deep learning on graph-structured data, and they have shown state-of-the-art performances on a variety of tasks. However, according to a recent study, a careful choice of pooling functions, which are used for the aggregation or readout operation in GNNs, is crucial for enabling GNNs to extrapolate. Without the ideal combination of pooling functions, which varies across tasks, GNNs completely fail to generalize to out-of-distribution d…
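For context, the pooling functions the abstract refers to are the aggregation/readout operations that reduce a set of node embeddings to a single graph-level vector. A minimal sketch of the three classic choices (the function name and shapes below are illustrative, not taken from the paper):

```python
import numpy as np

def readout(node_embeddings: np.ndarray, pooling: str = "sum") -> np.ndarray:
    """Reduce per-node embeddings of shape [num_nodes, dim] to one graph-level vector."""
    if pooling == "sum":
        return node_embeddings.sum(axis=0)
    if pooling == "mean":
        return node_embeddings.mean(axis=0)
    if pooling == "max":
        return node_embeddings.max(axis=0)
    raise ValueError(f"unknown pooling: {pooling}")

# Example: sum-pooling grows with graph size, while mean- and max-pooling stay bounded.
# This difference is one reason the choice of pooling matters for extrapolation to
# out-of-distribution (e.g., larger-than-training) graphs.
h = np.random.randn(5, 8)          # 5 nodes, 8-dimensional embeddings
graph_vector = readout(h, "mean")
```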

Cited by 2 publications (4 citation statements)
References 16 publications
“…For ROTP_B-E and ROTP_B-Q, we make α_0 learnable. The baselines we considered include i) classic pooling operations like Add-Pooling, Mean-Pooling, and Max-Pooling; ii) mixed pooling operations like the Mixed Mean-Max and the Gated Mean-Max in [10]; iii) learnable global pooling layers like DeepSet [15], Set2Set [14], DynamicPooling [1], GNP [28], and the Attention-Pooling and Gated Attention in [2]; iv) attention-based pooling methods for graphs, i.e., SAGPooling [33] and ASAPooling [32]; and v) OT-based pooling methods, i.e., OTK [59], WEGL [63], and SWE [62]. The above pooling methods are trained and tested on a server with two Nvidia RTX3090 GPUs, and their key hyperparameters are set by grid search.…”
Section: Methods (mentioning, confidence: 99%)
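For readers unfamiliar with the mixed pooling baselines named in the statement above, the Mixed Mean-Max and Gated Mean-Max operations of [10] combine mean- and max-pooling, the former with a fixed mixing weight and the latter with an input-dependent gate. A rough sketch under those assumptions (the parameter names and the exact gating form are illustrative, not taken from [10]):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mixed_mean_max(x: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Fixed convex combination of mean- and max-pooling over rows of x ([num_nodes, dim])."""
    return alpha * x.mean(axis=0) + (1.0 - alpha) * x.max(axis=0)

def gated_mean_max(x: np.ndarray, w: np.ndarray, b: float = 0.0) -> np.ndarray:
    """Input-dependent mixture: a scalar gate computed from the mean embedding sets the blend."""
    gate = sigmoid(x.mean(axis=0) @ w + b)     # gate in (0, 1)
    return gate * x.mean(axis=0) + (1.0 - gate) * x.max(axis=0)

x = np.random.randn(6, 4)                      # 6 nodes, 4-dimensional embeddings
w = np.random.randn(4)                         # gating weights (learned in practice)
pooled = gated_mean_max(x, w)
```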
“…In [10], the mixed mean-max pooling and its structured variants leverage mixture models of mean-pooling and max-pooling to improve pooling results. Recently, a generalized norm-based pooling (GNP) was proposed in [28], [29]. It can imitate max-pooling and mean-pooling under different settings.…”
Section: Pooling Operations (mentioning, confidence: 99%)
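To make the last point concrete, a norm-based (power-mean) pooling of the kind GNP generalizes recovers mean-pooling at p = 1 and approaches max-pooling as p grows. A minimal sketch assuming non-negative features (the function and parameter names are illustrative, and GNP in [28], [29] learns its norm parameters rather than fixing them):

```python
import numpy as np

def power_mean_pool(x: np.ndarray, p: float) -> np.ndarray:
    """Norm-based pooling over rows of x ([num_nodes, dim]); x is assumed non-negative.
    p = 1 reproduces mean-pooling; large p approaches max-pooling."""
    return np.mean(x ** p, axis=0) ** (1.0 / p)

np.random.seed(0)
x = np.random.rand(6, 4)                                                  # non-negative features
print(np.allclose(power_mean_pool(x, 1.0), x.mean(axis=0)))               # True: mean-pooling
print(np.allclose(power_mean_pool(x, 500.0), x.max(axis=0), atol=1e-2))   # True: close to max-pooling
```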