Proceedings of the 30th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2018)
DOI: 10.1145/3210377.3210393

Parallel Minimum Cuts in Near-linear Work and Low Depth

Abstract: We present the first near-linear-work, poly-logarithmic-depth algorithm for computing a minimum cut in a graph; previous parallel algorithms with poly-logarithmic depth required at least quadratic work in the number of vertices. In a graph with n vertices and m edges, our algorithm computes the correct result with high probability in O(m log⁴ n) work and O(log³ n) depth. This result is obtained by parallelizing a data structure that aggregates weights along paths in a tree and by exploiting the c…
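The central primitive behind this result is a tree structure that aggregates weights along paths. The sketch below is a naive reference implementation, not the paper's parallel data structure, and the class and method names are made up; it only shows the two path operations a Karger-style minimum-cut routine needs, each performed by walking the path explicitly in O(n) time. Dynamic trees handle them in O(log n) each sequentially, and the paper's contribution is evaluating many such operations with near-linear total work and poly-logarithmic depth.

# A naive reference sketch (illustrative only, not the paper's parallel
# data structure; class and method names are made up) of the two tree-path
# operations a Karger-style minimum-cut routine needs:
#   path_add(u, v, w): add w to every edge on the tree path between u and v
#   path_min(u, v):    return the minimum edge weight on that path
class NaivePathWeights:
    def __init__(self, parent, weight):
        # parent[v]: parent of v in the rooted tree (-1 at the root)
        # weight[v]: weight of the edge (v, parent[v]); unused at the root
        self.parent = parent
        self.weight = list(weight)
        self.depth = [0] * len(parent)
        for v in range(len(parent)):
            x = v
            while parent[x] != -1:
                self.depth[v] += 1
                x = parent[x]

    def _path_edges(self, u, v):
        # Yield the vertices whose parent edges lie on the u-v path.
        while self.depth[u] > self.depth[v]:
            yield u
            u = self.parent[u]
        while self.depth[v] > self.depth[u]:
            yield v
            v = self.parent[v]
        while u != v:
            yield u
            yield v
            u, v = self.parent[u], self.parent[v]

    def path_add(self, u, v, w):
        for x in self._path_edges(u, v):
            self.weight[x] += w

    def path_min(self, u, v):
        return min(self.weight[x] for x in self._path_edges(u, v))

# Example on the path 0-1-2-3 rooted at 0:
t = NaivePathWeights(parent=[-1, 0, 1, 2], weight=[0, 5, 2, 7])
t.path_add(1, 3, 1)        # edges (1,2) and (2,3) become 3 and 8
print(t.path_min(0, 3))    # 3: the cheapest edge on the whole path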


Cited by 15 publications (25 citation statements)
References: 31 publications
“…This result generalizes and improves on Geissmann and Gianinazzi [14], who give an algorithm for evaluating a batch of k path-weight updates and queries in Ω(k log² n) work.…”
Section: Introduction (supporting)
Confidence: 66%
“…A new wave of interest in the problem has recently pushed these frontiers. Geissmann and Gianinazzi [14] design a parallel algorithm for minimum 2-respecting cuts that performs O(m log³ n) work in O(log² n) depth. Their algorithm is based on parallelizing Karger's algorithm by replacing a sequential data structure for the so-called minimum-path problem, based on dynamic trees, with a data structure that can evaluate a batch of updates and queries in parallel.…”
Section: Introduction (mentioning)
Confidence: 99%
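The statement above describes replacing per-operation dynamic trees with a structure that evaluates a whole batch of path operations at once. As a hedged illustration of why having the batch up front helps (a standard offline technique, not the cited O(k log² n) structure; the function name is made up), the sketch below applies a batch of path-add updates in one bottom-up pass over the tree, using the "+w at u, +w at v, -2w at lca(u, v)" marking trick; both the markings and the bottom-up accumulation are amenable to parallel evaluation. Answering path-minimum queries interleaved with the updates needs the additional machinery that the cited data structures provide.

from collections import deque

# Apply a batch of path-add updates offline (illustrative sketch only).
def apply_path_adds(parent, weight, updates):
    # parent[v]: parent of v (-1 at the root); weight[v]: weight of the
    # edge (v, parent[v]). updates is a list of (u, v, w) path-add operations.
    n = len(parent)
    children = [[] for _ in range(n)]
    root = 0
    for v, p in enumerate(parent):
        if p == -1:
            root = v
        else:
            children[p].append(v)

    # Breadth-first traversal yields depths and a top-down vertex ordering.
    depth = [0] * n
    order = []
    queue = deque([root])
    while queue:
        v = queue.popleft()
        order.append(v)
        for c in children[v]:
            depth[c] = depth[v] + 1
            queue.append(c)

    def lca(u, v):
        # Naive lowest common ancestor by walking up; real implementations
        # use O(log n) ancestor jumps, but this keeps the sketch short.
        while depth[u] > depth[v]:
            u = parent[u]
        while depth[v] > depth[u]:
            v = parent[v]
        while u != v:
            u, v = parent[u], parent[v]
        return u

    # Each update marks only three vertices in a difference array.
    diff = [0] * n
    for u, v, w in updates:
        diff[u] += w
        diff[v] += w
        diff[lca(u, v)] -= 2 * w

    # One bottom-up pass turns subtree sums of diff into per-edge increments:
    # edge (x, parent[x]) lies on an update's path exactly when the subtree
    # of x contains exactly one of that update's endpoints.
    add = [0] * n
    for v in reversed(order):
        add[v] += diff[v]
        if parent[v] != -1:
            add[parent[v]] += add[v]

    return [weight[v] + add[v] if parent[v] != -1 else weight[v]
            for v in range(n)]

# Example on the path 0-1-2-3 rooted at 0:
print(apply_path_adds([-1, 0, 1, 2], [0, 5, 2, 7], [(1, 3, 1)]))  # [0, 5, 3, 8]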
“…We get an algorithm with O(log³ n) depth and O(m log n + n log⁴ n) work. This improves on the work complexity of the state-of-the-art algorithm of Geissmann and Gianinazzi [GG18], which has O(log³ n) depth and O(m log⁴ n) work.…”
Section: Improvements in PRAM Parallel Algorithms (mentioning)
Confidence: 89%
“…For the standard PRAM model of parallel algorithms (concurrent-read exclusive-write), our algorithm improves the total work while achieving the same depth complexity as the state of the art [GG18]. We get an algorithm with O(log³ n) depth and O(m log n + n log⁴ n) work.…”
Section: Improvements in PRAM Parallel Algorithms (mentioning)
Confidence: 96%