2021
DOI: 10.48550/arxiv.2111.00680

GCNear: A Hybrid Architecture for Efficient GCN Training with Near-Memory Processing

Abstract: Recently, Graph Convolutional Networks (GCNs) have become state-of-the-art algorithms for analyzing non-Euclidean graph data. However, it is challenging to realize efficient GCN training, especially on large graphs. The reasons are manifold: 1) GCN training incurs a substantial memory footprint. Full-batch training on large graphs can require hundreds to thousands of gigabytes of memory to buffer the intermediate data for back-propagation. 2) GCN training involves both memory-intensive data reduction and…
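The abstract refers to the two phases of a GCN layer: a memory-intensive aggregation (data reduction over the graph) followed by a dense feature transformation. A minimal NumPy sketch of one such layer, assuming the standard formulation H' = ReLU(Â H W) with a symmetrically normalized adjacency matrix (this is the generic GCN layer, not the paper's GCNear architecture):

```python
import numpy as np

def normalize_adjacency(adj):
    """Return D^{-1/2} (A + I) D^{-1/2} for a dense adjacency matrix."""
    a_hat = adj + np.eye(adj.shape[0])        # add self-loops
    deg = a_hat.sum(axis=1)                   # node degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def gcn_layer(a_hat, features, weights):
    """One graph-convolution layer: aggregation, then transformation."""
    aggregated = a_hat @ features               # memory-intensive reduction
    return np.maximum(aggregated @ weights, 0)  # dense transform + ReLU

# Toy 3-node path graph: edges 0-1 and 1-2.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
h = np.random.rand(3, 4)  # 4 input features per node
w = np.random.rand(4, 2)  # project to 2 hidden features
out = gcn_layer(normalize_adjacency(adj), h, w)
print(out.shape)  # (3, 2)
```

In full-batch training, the `aggregated` activations of every layer must be buffered for back-propagation, which is the source of the memory footprint the abstract describes.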

Cited by 0 publications
References 42 publications (66 reference statements)