2018
DOI: 10.48550/arxiv.1805.05208

Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable

Laxman Dhulipala,
Guy E. Blelloch,
Julian Shun

Abstract: There has been significant recent interest in parallel graph processing due to the need to quickly analyze the large graphs available today. Many graph codes have been designed for distributed memory or external memory. However, today even the largest publicly-available real-world graph (the Hyperlink Web graph with over 3.5 billion vertices and 128 billion edges) can fit in the memory of a single commodity multicore server. Nevertheless, most experimental work in the literature reports results on much smaller …


Cited by 2 publications (7 citation statements)
References 83 publications (174 reference statements)
“…Algorithms & Comparison Baselines We focus on modern heuristics from Table III. For each scheme, we always pick the most competitive implementation (i.e., fewest colors used and smallest performance overheads), selecting from existing repositories, illustrated in Table IV (ColPack [82], [90], Zoltan [35], [91]- [94], original code by Hasenplaugh et al (HP) [31], GBBS with Ligra [61], [95], [96]), and our implementation. Detailed parametrizations are in the reproducibility appendix.…”
Section: A. Methodology, Architectures, Parameters (mentioning, confidence: 99%)
“…Following related work [31], [61], we assume that a parallel computation (modeled as a DAG) runs on the ideal parallel computer (machine model). Each instruction executes in unit time and there is support for concurrent reads, writes, and read-modify-write atomics (any number of such instructions finish in O(1) time).…”
Section: Models for Algorithm Analysis (mentioning, confidence: 99%)
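The work/depth accounting described in the statement above can be illustrated with a small cost-counting sketch. This is a hypothetical helper (`reduce_cost` is not from the paper's code): it tallies the work (total instructions) and depth (longest dependency chain) of a divide-and-conquer parallel sum under the ideal parallel computer model, where both recursive halves run concurrently and each combine step costs O(1).

```python
def reduce_cost(n):
    """Return (work, depth) for summing n values by recursive halving.

    Under the ideal parallel computer model, the two recursive halves
    execute in parallel: work is additive across them, while depth is
    the maximum of the two sides plus one unit for the combine step.
    """
    if n <= 1:
        return (1, 1)
    w_left, d_left = reduce_cost(n // 2)
    w_right, d_right = reduce_cost(n - n // 2)
    # Work adds; depth takes the max of the parallel branches + O(1).
    return (w_left + w_right + 1, max(d_left, d_right) + 1)

work, depth = reduce_cost(1024)
print(work, depth)  # → 2047 11, i.e. O(n) work and O(log n) depth
```

The counts match the model's intent: a reduction over n = 1024 values does linear work but finishes in logarithmic depth, which is why such primitives scale on multicores in the papers cited.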