2021
DOI: 10.1007/978-3-030-82136-4_17

Improved Partitioning Graph Embedding Framework for Small Cluster

Cited by 2 publications (2 citation statements)
References 14 publications
“…Finally, all works that scale training beyond CPU memory utilize some form of graph partitioning [22]. Sun et al. [35] utilize partition recombination to improve shallow model quality in comparison to the static partitions used by PyTorch BigGraph. This method is similar to our two-level partition abstraction; however, we extend support to GNNs and analyze the effect of two-level policies on training time and accuracy.…”
Section: Related Work
confidence: 99%
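The recombination idea mentioned in this statement can be illustrated with a minimal, hypothetical sketch (not the method of Sun et al. or the PyTorch BigGraph API): rather than keeping a fixed grouping of node partitions, the groups of co-resident partitions are reshuffled each epoch so that, over time, more partition pairs end up being trained together.

```python
import random

def recombine_partitions(num_partitions, group_size, seed):
    """Randomly regroup node partitions into co-resident groups for one epoch.

    Hypothetical sketch: a static scheme would return the same groups every
    epoch, while recombination reshuffles them between epochs.
    """
    rng = random.Random(seed)
    parts = list(range(num_partitions))
    rng.shuffle(parts)
    # Chunk the shuffled partition ids into groups that fit in CPU memory.
    return [parts[i:i + group_size] for i in range(0, num_partitions, group_size)]

# Example: 8 node partitions, 2 partitions co-resident at a time.
for epoch in range(3):
    print(f"epoch {epoch}: {recombine_partitions(8, 2, seed=epoch)}")
```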
“…Replacement Policies for Disk-based Graph Learning: To scale training beyond CPU memory, Marius++ supports disk-based training for GNNs. Disk-based training requires that the graph be split into multiple node partitions [22,29,35]. Across training iterations, a subset of partitions is transferred to CPU memory, and mixed CPU-GPU training is performed on training data obtained from the induced subgraph.…”
Section: Introduction
confidence: 99%
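As a rough illustration of the partition-swapping loop this statement describes, the following sketch cycles a buffer of node partitions through CPU memory and trains on the subgraph induced by the resident nodes. The callables load_partition, induced_subgraph, and train_step are hypothetical placeholders, not the Marius++ API.

```python
# A minimal, hypothetical sketch of disk-based training with a partition
# buffer; placeholder callables stand in for the actual system components.

def disk_based_training(partition_ids, buffer_size, num_epochs,
                        load_partition, induced_subgraph, train_step):
    for epoch in range(num_epochs):
        # Move a subset of node partitions from disk into CPU memory.
        for start in range(0, len(partition_ids), buffer_size):
            subset = partition_ids[start:start + buffer_size]
            nodes = [n for pid in subset for n in load_partition(pid)]
            # Training data is the subgraph induced by the in-memory nodes;
            # mini-batches from it are moved to GPU for mixed CPU-GPU training.
            for batch in induced_subgraph(nodes):
                train_step(batch)
```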