Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2022
DOI: 10.1145/3503221.3508403
Scaling graph traversal to 281 trillion edges with 40 million cores

Cited by 10 publications (7 citation statements)
References 24 publications
“…Next, we compare the results from our model to a real example of a highly scalable graph analytics system. We use our model to estimate the performance of the new Sunway supercomputer running BFS on a synthetic graph with 17.56 trillion vertices and 281 trillion edges, and compare our result with the performance reported by Cao et al. [6].…”
Section: Use Case 3: System Evaluation
confidence: 94%
“…As the size of real-world graphs increases, future computing systems must support large-scale graph processing in a cost-effective and timely manner. Today's largest benchmark graphs are on the order of 100 billion edges and 3 billion vertices [3], and some systems are processing graphs with over 200 trillion edges [6]. Beyond this enormous scale, graph processing poses many challenges, such as frequent random memory accesses and low spatial and temporal locality [8].…”
Section: Introduction
confidence: 99%
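The irregular access pattern mentioned in the excerpt above is visible even in a minimal sketch of the kernel these systems run at scale. The following level-synchronous BFS (names such as `bfs_levels` and `adj` are illustrative, not from the cited paper) shows why: each neighbor lookup `adj[v]` jumps to an effectively random location, defeating caches and prefetchers.

```python
from collections import deque

def bfs_levels(adj, source):
    """Level-synchronous BFS over an adjacency-list graph.

    Returns a dict mapping each reachable vertex to its BFS level.
    The neighbor scans (adj[v]) touch memory at essentially random
    addresses, which illustrates the low spatial/temporal locality
    that makes large-scale BFS hard to run efficiently.
    """
    level = {source: 0}          # vertex -> distance from source
    frontier = deque([source])   # vertices discovered in the last level
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:         # irregular, data-dependent accesses
            if w not in level:   # first visit: record level, expand next
                level[w] = level[v] + 1
                frontier.append(w)
    return level

# Tiny usage example on a diamond-shaped graph.
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

Distributed implementations replace the single `frontier` queue with per-node frontiers exchanged each level, which is where communication cost and load imbalance enter.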
“…Therefore, academics and big technology companies such as Facebook, Google, and Microsoft have proposed different solutions for organizing and analyzing increasingly prevalent big graphs [12]. Furthermore, the size of these graphs has grown rapidly, now reaching hundreds of billions of nodes and trillions of edges [13], [14]. As graph size scales up, graph analysis can be performed in a distributed environment.…”
Section: Introduction
confidence: 99%
“…As graph size scales up, graph analysis can be performed in a distributed environment. However, graph computing remains challenging due to irregular accesses, lack of locality, and the intrinsically imbalanced distribution of graph load across computing clusters [14]. Thus, researchers highlight the critical role of designing such computing systems in our society today [12].…”
Section: Introduction
confidence: 99%
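The load-imbalance problem noted in the excerpt above is easy to reproduce: real graphs have skewed (power-law-like) degree distributions, so a naive 1D partition hands a few workers far more edges than the rest. The sketch below is a hypothetical illustration (the synthetic Zipf-like degrees and the modulo partitioning are assumptions for demonstration, not the partitioning used by the cited systems).

```python
# Synthetic skewed degrees: vertex v has roughly 1000/(v+1) edges,
# so a handful of hub vertices own most of the graph's edges.
num_vertices, num_workers = 10_000, 8
degrees = [int(1000 / (v + 1)) + 1 for v in range(num_vertices)]

# Naive 1D partition: vertex v is owned by worker v % num_workers.
# A worker's load is the total degree of the vertices it owns.
load = [0] * num_workers
for v, d in enumerate(degrees):
    load[v % num_workers] += d

avg = sum(load) / num_workers
print(f"imbalance factor: {max(load) / avg:.2f}")  # noticeably above 1.0
```

Because worker 0 happens to own the highest-degree hub, its load exceeds the average, and in real power-law graphs this gap is what motivates 2D partitioning and degree-aware vertex splitting.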