2021
DOI: 10.48550/arxiv.2110.09524
Preprint
Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective

Abstract: Graph Neural Networks (GNNs) have been widely used in various domains, but GNNs with sophisticated computational graphs incur higher latency and larger memory consumption. Optimizing the GNN computational graph suffers from: (1) Redundant neural operator computation. The same data are propagated through the graph structure to perform the same neural operation multiple times, leading to redundant computation that accounts for 92.4% of total operators. (2) Inconsistent thread mapping. Efficient thread…
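To make the redundancy concrete, here is a minimal NumPy sketch (not code from the paper; a single linear transform W stands in for the neural operator, and the toy edge list, shapes, and variable names are made up for illustration). Propagating first and then applying W transforms a source node's feature once per outgoing edge; applying W once per node and then propagating produces identical messages with far fewer FLOPs:

# A minimal NumPy sketch (not the paper's code) of the redundancy the
# abstract describes: applying a neural operator per edge repeats the
# same computation for every edge that shares a source node, whereas
# applying it once per node and then propagating avoids the repeats.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, feat_in, feat_out = 4, 8, 8
# Toy edge list (src -> dst); node 0 appears as a source three times.
src = np.array([0, 0, 0, 1, 2])
dst = np.array([1, 2, 3, 3, 3])

X = rng.standard_normal((num_nodes, feat_in))   # node features
W = rng.standard_normal((feat_in, feat_out))    # a linear "neural op"

# Redundant order: propagate first, then apply W to every edge message.
# X[src] materializes one copy of the source feature per edge, so node
# 0's feature is multiplied by W three times.
msgs_redundant = X[src] @ W                     # (num_edges, feat_out)

# Reorganized order: apply W once per node, then propagate the results.
H = X @ W                                       # (num_nodes, feat_out)
msgs_reordered = H[src]                         # just a gather, no FLOPs

assert np.allclose(msgs_redundant, msgs_reordered)
# Both orders aggregate identically, e.g., summing messages into dst:
out = np.zeros((num_nodes, feat_out))
np.add.at(out, dst, msgs_reordered)

The two orders agree because a linear map distributes over the per-edge gather; the operator-reorganization idea cited below generalizes this observation to full GNN computational graphs.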

Cited by 1 publication (3 citation statements)
References 19 publications
“…Instead, they target GNN computations from scratch, focusing on GNN-specific workload characteristics and design decisions [113], [171], [172], [215]. For example, Zhang et al. [238] analyze the computational graph of GNNs and propose optimizations tailored specifically for GNNs.…”
Section: Reference
confidence: 99%
“…While most systems fuse operators within a single GNN layer, QGTC also offers operator fusion across different GNN layers [214]. Other interesting schemes include operator reorganization [238], in which operators first perform neural operations (e.g., an MLP) and only then propagate the updated feature vectors. This limits redundant computation.…”
Section: Optimizations in GNN Systems
confidence: 99%
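The within-layer fusion this statement mentions can be illustrated in the same toy setting. The sketch below is an assumed conceptual illustration, not QGTC's or the paper's implementation: it contrasts an unfused gather-then-reduce pipeline, which materializes a per-edge message tensor, with a fused accumulation that avoids the intermediate buffer. In a real system the fused variant would be a single GPU kernel rather than a Python loop.

# Conceptual sketch (assumed illustration, not QGTC's implementation) of
# why operator fusion matters: the unfused pipeline materializes an
# intermediate per-edge message tensor between the gather and the
# reduce, while a fused pass reads features and accumulates directly.
import numpy as np

rng = np.random.default_rng(1)
num_nodes, feat = 5, 16
src = np.array([0, 1, 1, 2, 3, 4])
dst = np.array([1, 0, 2, 4, 4, 0])
X = rng.standard_normal((num_nodes, feat))

# Unfused: gather -> intermediate buffer -> reduce. Extra IO and memory
# proportional to num_edges * feat.
messages = X[src]                      # materialized intermediate
out_unfused = np.zeros_like(X)
np.add.at(out_unfused, dst, messages)  # sum-aggregate into destinations

# "Fused" (emulated with a Python loop): each edge reads its source row
# and accumulates into its destination row, with no intermediate
# message tensor.
out_fused = np.zeros_like(X)
for s, d in zip(src, dst):
    out_fused[d] += X[s]

assert np.allclose(out_unfused, out_fused)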