Proceedings of the 2022 SIAM International Conference on Data Mining (SDM)
DOI: 10.1137/1.9781611977172.20
Neural Graph Matching for Pre-training Graph Neural Networks

Cited by 141 publications (356 citation statements). References: 0 publications.
“…In the first method, edge weights are added to the features of the nodes (Hu et al., 2020a; Li et al., 2020). In the second approach, edge weights are treated the same way as node features in the aggregation function of the GNNs (Gilmer et al., 2017; Xu et al., 2019; Kipf & Welling, 2017; Hu et al., 2020b).…”
Section: Discussion
confidence: 99%
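The two strategies described in that excerpt can be contrasted with a minimal sketch. Everything below is illustrative: the toy graph, the scalar edge weights, and the function names are assumptions, not code from any of the cited works.

```python
import numpy as np

# Hypothetical toy graph: 3 nodes with 2-d features, directed edges
# (u -> v) carrying scalar weights. All values are made up.
h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
edges = [(0, 1), (2, 1), (1, 0)]
w = {(0, 1): 0.5, (2, 1): 2.0, (1, 0): 1.0}

def method_1(h, edges, w):
    """Method 1: add edge weights to the node features themselves
    (here: append each node's total incident weight as an extra
    feature channel), then aggregate with a plain neighbor sum."""
    inc = np.zeros((h.shape[0], 1))
    for u, v in edges:
        inc[v, 0] += w[(u, v)]
    h_aug = np.hstack([h, inc])   # node features now carry edge info
    out = np.zeros_like(h_aug)
    for u, v in edges:
        out[v] += h_aug[u]        # aggregation itself ignores weights
    return out

def method_2(h, edges, w):
    """Method 2: keep node features as-is and let the edge weight
    enter the aggregation function directly, per message."""
    out = np.zeros_like(h)
    for u, v in edges:
        out[v] += w[(u, v)] * h[u]
    return out
```

The difference is where the edge information enters: before aggregation as part of the node representation (method 1), or inside the aggregation step itself (method 2).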
“…• Mutagenicity [50], [51] has 4,377 molecule graphs, where each graph is labeled with one of two labels, mutagenic or non-mutagenic, based on the mutagenic effect on a bacterium. We trained a GIN model [23], [52] to perform the binary classification. • REDDIT-MULTI-5K [53] has 4,999 social networks labeled with five different classes to indicate the topics of question/answer communities.…”
Section: Dataset Description
confidence: 99%
“…However, if there are multi-dimensional edge features, the SpMM/GEMM formulation no longer holds. For example, GIN [20] with edge embeddings is formulated as below:…”
Section: Limitations
confidence: 99%
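The equation in that excerpt was cut off during extraction. One common formulation of GIN with edge embeddings, due to Hu et al. (2020), is shown below as a plausible reconstruction, not the excerpt's exact equation:

```latex
h_v^{(k)} = \mathrm{MLP}^{(k)}\!\Big( (1+\epsilon)\, h_v^{(k-1)}
  + \sum_{u \in \mathcal{N}(v)} \mathrm{ReLU}\big( h_u^{(k-1)} + e_{uv} \big) \Big)
```

Here $h_v^{(k)}$ is the embedding of node $v$ at layer $k$ and $e_{uv}$ is the edge embedding; because $e_{uv}$ enters each message nonlinearly, the update no longer reduces to a single sparse-dense matrix product, which is the limitation the excerpt is pointing at.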
“…All of these models also use global average pooling, and an output head with a single linear layer. For PNA, we use 4 layers with a node embedding dimension of 80, global average pooling, and an MLP-ReLU head with sizes (40, 20, 1). For DGN, we use 4 layers and a node embedding dimension of 100, global average pooling, and an MLP-ReLU head with sizes (50, 25, 1).…”
Section: Model and Implementation Details
confidence: 99%
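An "MLP-ReLU head with sizes (40, 20, 1)" most plausibly means three linear layers with those output widths and ReLU between them (none after the final scalar output). A minimal numpy sketch under that assumption; the random placeholder weights and the function name are illustrative, not the cited paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_relu_head(x, sizes):
    """Apply linear layers with the given output widths, inserting a
    ReLU between consecutive layers but not after the last one.
    Weights are random placeholders standing in for trained ones."""
    for i, out_dim in enumerate(sizes):
        W = rng.standard_normal((x.shape[-1], out_dim))
        b = np.zeros(out_dim)
        x = x @ W + b
        if i < len(sizes) - 1:
            x = np.maximum(x, 0)  # ReLU between layers only
    return x

# A pooled graph embedding of dimension 80 (the PNA setting in the
# excerpt) is mapped to a single scalar prediction.
g = rng.standard_normal(80)
y = mlp_relu_head(g, (40, 20, 1))
```

Whether a final activation follows the last layer is not stated in the excerpt; omitting it (as above) is the usual choice for a regression or logit output.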