2020
DOI: 10.48550/arxiv.2011.15069
Preprint

Graph convolutions that can finally model local structure

Cited by 14 publications (30 citation statements)
References 0 publications
“…Although our method and DGN [4] both require eigendecomposition in the preprocessing step, non-spatial GN significantly outperforms DGN on ZINC and MolPCBA.

ZINC (#params, test MAE):
GIN [75]: 509,549, 0.526±0.051
GraphSage [23]: 505,341, 0.398±0.002
GAT [66]: 531,345, 0.384±0.007
GCN [35]: 505,079, 0.367±0.011
GatedGCN-PE [8]: 505,011, 0.214±0.006
MPNN (sum) [22]: 480,805, 0.145±0.007
PNA [16]: 387,155, 0.142±0.010
DGN [4]: -, 0.168±0.003
GSN [7]: 523,201, 0.101±0.010
GT [19]: 588,929, 0.226±0.014
SAN [39]: 508,577, 0.139±0.006
GraphormerSLIM [79]: -

MolPCBA (#params, AP %):
GCN [35]: 0.56M, 20.20±0.24
GIN [75]: 1.92M, 22.66±0.28
GCN-VN [35]: 2.02M, 24.24±0.34
GIN-VN [75]: 3.37M, 27.03±0.23
GCN-VN+FLAG [38]: 2.02M, 24.83±0.37
GIN-VN+FLAG [38]: 3.37M, 28.34±0.38
DeeperG-VN+FLAG [43]: 5.55M, 28.42±0.43
PNA [16]: 6.55M, 28.38±0.35
DGN [4]: 6.73M, 28.85±0.30
GINE-VN [10]: 6.15M, 29.17±0.15
GINE-APPNP [10]: 6.15M, 29.79±0.30
PHC-GNN [40]: 1.69M, 29.47±0.26…”
Section: Results
Mentioning confidence: 99%
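The eigendecomposition preprocessing mentioned in this quote typically refers to computing low-frequency eigenvectors of each graph's normalized Laplacian and attaching them to nodes as positional features. Below is a minimal sketch of that kind of step; the function name, the choice of k, and the use of the smallest-eigenvalue eigenvectors are illustrative assumptions, not the exact procedure of DGN or the quoted method.

```python
import numpy as np

def laplacian_eigvecs(adj, k=8):
    """Eigenvectors of the symmetric normalized Laplacian with the smallest
    eigenvalues, usable as per-node positional features (illustrative only)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5           # D^{-1/2}, guarding isolated nodes
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)     # eigh returns ascending eigenvalues
    return eigvecs[:, :k]                      # (num_nodes, k) positional features

# Tiny usage example: a 4-node path graph.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
pe = laplacian_eigvecs(adj, k=2)
print(pe.shape)  # (4, 2) -> 2 positional features per node
```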
“…They achieve the state-of-the-art valid and test mean absolute error (MAE) on the official leaderboard [20]. In addition, we compare to GIN's multi-hop variant [5] and the 12-layer deep graph network DeeperGCN [29], which also show promising performance on other leaderboards. We further compare our Graphormer with the recent Transformer-based graph model GT [13].…”
Section: OGB Large-Scale Challenge
Mentioning confidence: 93%
“…All models are trained on 8 NVIDIA V100 GPUs for about 2 days.

[5,15]: 13.2M, 0.1248, 0.1430, -
DeeperGCN-VN [29,15]: 25.5M, 0.1059, 0.1398, -
GT [13]: 0.6M, 0.0944, 0.1400, -
GT-Wide [13]: 83.2M

GT [13] employs a hidden dimension of 64 to reduce the total number of parameters. For a fair comparison, we also report the result obtained by enlarging the hidden dimension to 768, denoted GT-Wide, which leads to a total number of parameters of 83.2M.…”
Section: OGB Large-Scale Challenge
Mentioning confidence: 99%
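The jump from 0.6M parameters at hidden dimension 64 to 83.2M at hidden dimension 768 is consistent with Transformer layer parameters growing roughly quadratically in the hidden size. A back-of-the-envelope check under that assumption (not a calculation from the cited paper):

```python
# Rough check (an assumption, not taken from the cited paper): Transformer
# layer parameters grow ~quadratically with hidden dimension d, so widening
# GT from d=64 to d=768 should scale the count by about (768 / 64) ** 2 = 144.
d_small, d_wide = 64, 768
params_small = 0.6e6                      # GT's reported parameter count
scale = (d_wide / d_small) ** 2           # ~144x
params_wide_est = params_small * scale
print(f"estimated GT-Wide parameters: {params_wide_est / 1e6:.1f}M")
# -> ~86.4M, in the same ballpark as the reported 83.2M
```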
“…OGB. We use GNNs achieving top places on the OGB graph classification leaderboard (at the time of submission) as the baselines, including GCN [12], GIN [27], DeeperGCN [67], Deep LRP [39], PNA [68], DGN [33], GINE [69], and PHC-GNN [70]. Note that high-order GNNs [19][20][21][25] are not included here: despite being theoretically more discriminative, they are not among the GNNs with the best empirical performance on modern large-scale graph benchmarks, and their O(n^3) complexity also raises a scalability issue.…”
Section: Models
Mentioning confidence: 99%