Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2021
DOI: 10.1145/3437801.3441585

Understanding and bridging the gaps in current GNN performance optimizations

Cited by 60 publications (32 citation statements) · References 33 publications
“…Use of Graph Processing in GCN Training: State-of-the-art GCN training systems on GPUs [5], [16], [17] typically adopt the vertex-centric programming model to conduct training on input graphs. In GCN training, however, a vertex can carry up to hundreds of features, far more than in traditional graph algorithms, which leads to high atomic overheads if the push mode is used.…”
Section: B. Graph Processing Systems on GPUs
confidence: 99%
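To make the atomic-overhead argument concrete, here is a minimal Python sketch on a toy graph; the function names and the graph are illustrative and not from any of the cited systems. Push mode scatters each edge's source features into the destination's accumulator, so the number of atomic updates grows with |E| times the feature count, while pull mode lets one thread own each output slot.

```python
def push_aggregate(edges, feats, num_feats):
    """Push mode: every edge scatters its source features into the
    destination's accumulator. On a GPU, each of these updates would
    need an atomicAdd, so atomics scale with |E| * num_feats."""
    acc = {v: [0.0] * num_feats for v in feats}
    atomic_updates = 0
    for src, dst in edges:
        for f in range(num_feats):
            acc[dst][f] += feats[src][f]   # would be atomicAdd on a GPU
            atomic_updates += 1
    return acc, atomic_updates

def pull_aggregate(in_nbrs, feats, num_feats):
    """Pull mode: each destination gathers from its in-neighbors, so a
    single thread owns each output slot and no atomics are needed."""
    return {dst: [sum(feats[s][f] for s in nbrs) for f in range(num_feats)]
            for dst, nbrs in in_nbrs.items()}

edges = [(0, 1), (2, 1), (0, 2)]
feats = {0: [1.0, 2.0], 1: [3.0, 4.0], 2: [5.0, 6.0]}
pushed, atomics = push_aggregate(edges, feats, 2)
# 3 edges * 2 features = 6 atomic updates; with hundreds of features
# per vertex, the atomic traffic grows proportionally.
```

With two features the toy graph already needs six atomic updates in push mode; a GCN layer with hundreds of features multiplies that cost accordingly.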
“…Moreover, unlike graph processing systems, which generally assign one software thread to process a vertex, GCN training systems typically divide the features of a vertex across multiple threads for parallel processing. For load balancing, the neighbor-grouping technique from graph processing is also adopted by recent GCN training systems on GPUs [5], [17]. As in graph processing systems, however, neighbor grouping likewise incurs atomic overheads when updating the data attached to the vertices.…”
Section: B. Graph Processing Systems on GPUs
confidence: 99%
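The neighbor-grouping idea above can be sketched in a few lines of Python; the group size, adjacency layout, and function names are illustrative assumptions, not taken from the cited systems. Each vertex's neighbor list is split into fixed-size groups so work units have roughly equal cost, and combining partial sums for the same destination is exactly where the atomic updates arise.

```python
GROUP_SIZE = 2  # illustrative; real systems tune this per kernel

def make_groups(in_nbrs, group_size=GROUP_SIZE):
    """Split each vertex's neighbor list into fixed-size groups so that
    work units have similar cost regardless of vertex degree."""
    groups = []
    for dst, nbrs in in_nbrs.items():
        for i in range(0, len(nbrs), group_size):
            groups.append((dst, nbrs[i:i + group_size]))
    return groups

def grouped_aggregate(groups, feats, num_feats, num_vertices):
    """Each group (a thread or warp in a real GPU kernel) computes a
    partial sum; merging partials for the same destination is where
    atomic updates are needed."""
    acc = [[0.0] * num_feats for _ in range(num_vertices)]
    for dst, nbrs in groups:
        partial = [sum(feats[s][f] for s in nbrs) for f in range(num_feats)]
        for f in range(num_feats):
            acc[dst][f] += partial[f]   # atomicAdd in a GPU kernel
    return acc

in_nbrs = {0: [1, 2, 3], 1: [0]}
feats = [[1.0], [2.0], [3.0], [4.0]]
groups = make_groups(in_nbrs)   # the degree-3 vertex splits into 2 groups
acc = grouped_aggregate(groups, feats, 1, 4)
```

The high-degree vertex yields two groups that can run on different threads, which balances the load but forces the two partial sums to be merged atomically.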
“…FuseGNN (Chen et al., 2020) fuses edge operators to accelerate GNN computation, but it lacks a technique for fusing a vertex-centric operator with an edge-centric one. Huang et al. (2021) also propose a fusion technique for GNNs, but it cannot handle GNN training because the intermediate data are not retained.…”
Section: Introduction
confidence: 99%
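The fusion trade-off described above can be illustrated with a hypothetical Python sketch; the edge operator (a scaling), the functions, and the toy data are all illustrative assumptions, not code from FuseGNN or the cited work. The unfused version materializes per-edge messages before reducing, which costs memory but keeps the intermediates a backward pass would need; the fused version accumulates in one pass and never stores them.

```python
def unfused(edges, feats):
    """Unfused: materialize per-edge messages (edge-centric op), then
    reduce per destination (vertex-centric op). The edge buffer costs
    memory but preserves intermediates for backpropagation."""
    messages = [(dst, feats[src] * 0.5) for src, dst in edges]  # edge op
    out = {}
    for dst, m in messages:
        out[dst] = out.get(dst, 0.0) + m                        # vertex op
    return out, messages

def fused(edges, feats):
    """Fused: apply the edge op and accumulate in a single pass. Faster
    and memory-light, but the per-edge intermediates are never stored,
    which is why a fused forward pass alone cannot support training."""
    out = {}
    for src, dst in edges:
        out[dst] = out.get(dst, 0.0) + feats[src] * 0.5
    return out

edges = [(0, 1), (2, 1)]
feats = [2.0, 0.0, 4.0]
ref, msgs = unfused(edges, feats)
```

Both versions compute identical outputs in the forward pass; they differ only in whether the per-edge intermediates survive for reuse.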