Proceedings of the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2010
DOI: 10.1145/1693453.1693457

Structure-driven optimizations for amorphous data-parallel programs

Abstract: Irregular algorithms are organized around pointer-based data structures such as graphs and trees, and they are ubiquitous in applications. Recent work by the Galois project has provided a systematic approach for parallelizing irregular applications based on the idea of optimistic or speculative execution of programs. However, the overhead of optimistic parallel execution can be substantial. In this paper, we show that many irregular algorithms have structure that can be exploited and present three key optimiza…
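The overhead mentioned in the abstract comes from detecting conflicts between concurrently executing activities and undoing mis-speculated work. As a rough illustration (not the Galois runtime's actual mechanism), the sketch below shows the conflict-detection step as per-node try-locks over an activity's neighborhood; Node, Neighborhood, and the function names are assumptions made for this example.

```cpp
// Illustrative sketch only: per-node try-locking over an activity's
// neighborhood, the basic conflict-detection step of optimistic execution.
// Node, Neighborhood, and the function names are assumptions for this
// example, not the Galois runtime's API.
#include <mutex>
#include <vector>

struct Node {
    std::mutex lock;   // ownership marker used for conflict detection
    int data = 0;      // application state (placeholder)
};

using Neighborhood = std::vector<Node*>;   // nodes an activity may touch

// Try to acquire every node the activity will read or write. If any node is
// already owned by another activity, release what we hold and report a
// conflict so the activity can be aborted and retried later; this
// abort/retry path is the overhead that structure-driven optimizations
// aim to reduce.
bool try_acquire(const Neighborhood& nbhd) {
    std::vector<Node*> held;
    for (Node* n : nbhd) {
        if (!n->lock.try_lock()) {
            for (Node* h : held) h->lock.unlock();
            return false;            // conflict: caller rolls back
        }
        held.push_back(n);
    }
    return true;                     // neighborhood owned: safe to proceed
}

void release(const Neighborhood& nbhd) {
    for (Node* n : nbhd) n->lock.unlock();
}
```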

Cited by 50 publications (44 citation statements)
References 24 publications
“…Instead of state separation, Kulkarni et al. proposed a rollback-based speculative parallelization technique [15][16][17][18][21]. They introduce two special constructs that users can employ to identify speculative parallelism.…”
Section: Related Work
confidence: 99%
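The "two special constructs" in this excerpt are, in the Galois work, optimistic set iterators over a workset of activities. The hedged sketch below only illustrates the sequential semantics such a construct exposes, namely an unordered worklist loop whose iterations may create new work, which is what makes speculative parallel execution of the loop legal; for_each_in_worklist is an invented name, not the Galois API.

```cpp
// Hypothetical illustration of the sequential semantics behind an
// unordered set-iterator construct: iterations may add new work and the
// processing order is unspecified, so a runtime is free to execute the
// iterations speculatively in parallel. Not the actual Galois interface.
#include <deque>

template <typename T, typename Body>
void for_each_in_worklist(std::deque<T> worklist, Body body) {
    while (!worklist.empty()) {
        T item = worklist.front();
        worklist.pop_front();
        body(item, worklist);   // the body may push follow-on work
    }
}

// Toy usage: processing one item may generate another, smaller item.
int main() {
    for_each_in_worklist<int>({5, 3}, [](int n, std::deque<int>& wl) {
        if (n > 0) wl.push_back(n - 1);   // follow-on work
    });
    return 0;
}
```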
“…We will refer to this strategy as discard-all. Some other works employ rollback to recover from misspeculation [15,18,21]. The rollback cost is minimized by having the programmer exploit application specific knowledge in providing the rollback code.…”
Section: Introduction
confidence: 99%
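The programmer-provided rollback code this excerpt refers to is semantic undo: for each speculative mutation, the application registers how to reverse it, so misspeculation can be repaired without copying or discarding whole data structures. Below is a minimal sketch under that reading; UndoLog and its methods are hypothetical names, not an interface from the cited papers.

```cpp
// Hypothetical sketch of programmer-supplied rollback: before each
// speculative mutation the activity records an inverse action (for example,
// the semantic undo of adding an edge), and on misspeculation the log is
// replayed in reverse to restore the old state.
#include <functional>
#include <vector>

class UndoLog {
    std::vector<std::function<void()>> inverses_;
public:
    // Record the user-provided inverse of the mutation about to be made.
    void record(std::function<void()> inverse) {
        inverses_.push_back(std::move(inverse));
    }
    // Misspeculation: apply the inverses newest-first, then clear the log.
    void rollback() {
        for (auto it = inverses_.rbegin(); it != inverses_.rend(); ++it) {
            (*it)();
        }
        inverses_.clear();
    }
    // Commit: the speculative work is kept, so the log is simply dropped.
    void commit() { inverses_.clear(); }
};
```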
“…Since each activity performs a relatively small computation, the overhead of adding and removing work from the centralized work-list can be substantial. To reduce this performance penalty, we use iteration coalescing [23], which can be viewed as a data-centric version of loop chunking [27]. When an activity adds an edge to the graph, it checks to see if the new edge violates any invariants at the source node.…”
Section: Optimizations
confidence: 99%
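One way to picture the iteration coalescing described above: a thread pulls activities from the centralized work-list in chunks and keeps the follow-on work it generates (for example, the invariant-repair activity triggered by a newly added edge) in a thread-private deque. The sketch below is a hypothetical illustration of that idea, not the implementation from [23] or [27]; all names are invented.

```cpp
// Hypothetical sketch of iteration coalescing, a data-centric analogue of
// loop chunking. A thread refills a private deque from the shared worklist
// in chunks and pushes any follow-on work it creates onto that private
// deque, so most adds and removes never touch the contended central list.
#include <cstddef>
#include <deque>
#include <mutex>

struct Activity { int node = 0; };

std::deque<Activity> shared_worklist;   // centralized work-list (contended)
std::mutex shared_lock;
constexpr std::size_t kChunk = 16;

// One synchronized operation moves a whole chunk into the private deque,
// amortizing the cost of touching the shared list over kChunk activities.
void refill_local(std::deque<Activity>& local) {
    std::lock_guard<std::mutex> guard(shared_lock);
    for (std::size_t i = 0; i < kChunk && !shared_worklist.empty(); ++i) {
        local.push_back(shared_worklist.front());
        shared_worklist.pop_front();
    }
}

void run_worker() {
    std::deque<Activity> local;
    for (;;) {
        if (local.empty()) {
            refill_local(local);
            if (local.empty()) break;   // shared worklist drained
        }
        Activity a = local.front();
        local.pop_front();
        // Apply the operator at a.node here; if it adds an edge that breaks
        // an invariant at the source node, push the repair activity onto
        // `local` rather than onto `shared_worklist`.
        (void)a;
    }
}
```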
“…This paper describes a parallel PathFinder implementation using the open source Galois framework [12,14]. Galois' programming model, compiler, and runtime synergistically accelerate irregular algorithms that dynamically modify linked-based data structures.…”
Section: Introduction
confidence: 99%