Proceedings of the 32nd ACM SIGPLAN Conference on Programming Language Design and Implementation 2011
DOI: 10.1145/1993498.1993501

The tao of parallelism in algorithms

Abstract: For more than thirty years, the parallel programming community has used the dependence graph as the main abstraction for reasoning about and exploiting parallelism in "regular" algorithms that use dense arrays, such as finite-differences and FFTs. In this paper, we argue that the dependence graph is not a suitable abstraction for algorithms in new application areas like machine learning and network analysis in which the key data structures are "irregular" data structures like graphs, trees, and sets. To address…

Cited by 214 publications (48 citation statements). References 60 publications.
“…We modified all of the data structures used by the router to be thread-safe and compatible with Galois. We rewrote the router to be compliant with the operator formalism [14], which is a central requirement of Galois. We implemented the RRG using Galois' graph model, and the STM-based non-blocking PQ as discussed in Section 4.2.…”
Section: Experimental Setup, 5.1 VPR Implementation in Galois (confidence: 99%)
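The operator formalism mentioned above structures an algorithm as an operator repeatedly applied to active elements drawn from a worklist. A minimal sequential sketch of that pattern (the names and graph representation here are illustrative, not the actual Galois API):

```python
from collections import deque

def worklist_algorithm(graph, initial_active, operator):
    """Apply `operator` to active nodes until the worklist is empty.
    The operator reads/writes a node's neighborhood and returns any
    newly activated nodes, in the style of the operator formalism."""
    worklist = deque(initial_active)
    while worklist:
        node = worklist.popleft()
        worklist.extend(operator(graph, node))

# Example operator: relax labels toward shortest hop-counts from a source
def relax(graph, node):
    activated = []
    for nbr in graph["adj"][node]:
        if graph["label"][nbr] > graph["label"][node] + 1:
            graph["label"][nbr] = graph["label"][node] + 1
            activated.append(nbr)
    return activated
```

Because each application of the operator touches only a local neighborhood, a runtime like Galois can attempt to run independent applications in parallel.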
“…It is an irregular algorithm, in the sense that it operates on a sparse graph, typically implemented using a linked data structure. Irregular algorithms do exhibit significant parallelism, but are difficult to parallelize statically [14] because the amount of parallelism depends on the content of the data structure (e.g., graph topology) as well as the operations performed on the elements of the data structure at runtime.…”
Section: Introduction (confidence: 99%)
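The claim that available parallelism depends on the graph's contents can be made concrete with a small sketch: treat two active nodes as conflicting when their neighborhoods overlap, and greedily group the active nodes into conflict-free batches. The batch sizes then reflect how much parallelism a given topology exposes (an illustrative model, not the scheduling policy of any particular runtime):

```python
def conflict_free_batches(adj, active):
    """Greedily group active nodes into batches whose footprints
    (a node plus its neighbors) do not overlap; each batch could
    run in parallel, so batch count/size reflects available parallelism."""
    batches, remaining = [], list(active)
    while remaining:
        batch, locked, rest = [], set(), []
        for n in remaining:
            footprint = {n, *adj[n]}
            if footprint & locked:
                rest.append(n)          # conflicts with this batch
            else:
                batch.append(n)
                locked |= footprint
        batches.append(batch)
        remaining = rest
    return batches
```

On a star graph every active leaf conflicts at the hub, forcing serial execution, while well-separated nodes on a path run in a single batch; the same operator thus yields very different parallelism depending on the input.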
“…However, taking advantage of such support for multithreading is not always a straightforward decision, since it often depends on the target platform, the input data set (a collection of data), and other details. Moreover, in irregular applications (programs that manipulate pointer-based data structures, such as graphs and trees), the amount of parallelism is not predictable at compile time (it is highly input dependent [22]) and may vary considerably at runtime [23]. Unfortunately, neither traditional programming models nor compilers are sufficient to manage the complexity of exploiting all levels of parallelism in such applications [24][25][22].…”
Section: Introduction (confidence: 99%)
“…Algorithms are formulated as iterative computations on work-sets, and each iteration is identified as a quantum of work (task) that can potentially be executed in parallel with other iterations. The Galois project [18] has shown that algorithms formulated in this way can be parallelized automatically using optimistic parallelization: iterations are executed speculatively in parallel and, when an iteration conflicts with concurrently executing iterations, it is rolled back. Algorithms that have been successfully parallelized in this manner include survey propagation [5], Boruvka's algorithm [6], Delaunay triangulation and refinement [12], and agglomerative clustering [21].…”
Section: Introduction (confidence: 99%)
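The speculate-and-roll-back scheme described above can be sketched sequentially: each iteration runs speculatively and records the nodes it touches; if that footprint overlaps an iteration that already committed in the same round, its changes are undone and the element is retried later. This is a toy model of optimistic parallelization, not the Galois runtime (which uses per-node locks and undo logs rather than whole-state snapshots):

```python
import copy

def optimistic_round(graph, active, operator):
    """One round of (sequentially simulated) optimistic execution:
    returns the elements that conflicted and must be retried."""
    committed = set()   # nodes locked by iterations that committed
    retry = []
    for node in active:
        snapshot = copy.deepcopy(graph)      # cheap stand-in for an undo log
        footprint = set()
        operator(graph, node, footprint)     # speculative execution
        if footprint & committed:            # conflict detected
            graph.clear()
            graph.update(snapshot)           # roll back this iteration only
            retry.append(node)
        else:
            committed |= footprint           # commit
    return retry

# Example operator: increment a node and its neighbors, recording the footprint
def bump(graph, node, footprint):
    for n in [node, *graph["adj"][node]]:
        graph["val"][n] += 1
        footprint.add(n)
```

Two active nodes whose neighborhoods share a vertex illustrate the conflict path: the first commits, the second is rolled back and deferred to the next round.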