2008
DOI: 10.1145/1353535.1346311

Optimistic parallelism benefits from data partitioning

Abstract: Recent studies of irregular applications such as finite-element mesh generators and data-clustering codes have shown that these applications have a generalized data parallelism arising from the use of iterative algorithms that perform computations on elements of worklists. In some irregular applications, the computations on different elements are independent. In other applications, there may be complex patterns of dependences between these computations. The Galois system was designed to exploit this kind of irr…

Cited by 12 publications (14 citation statements)
References 33 publications
“…The FailedAt attribute of t indicates which live-in variables caused the speculation to fail. Since, when each live-in variable is first read, the instruction address of the read and the space ID before the read are recorded (Figure 10, line 2), the main thread can retrieve these two values for the first accessed live-in variable by calling two further auxiliary functions, GetRecoveryPC and getRecoverySpaceID (lines 16-17). These two values determine the starting point of the re-execution.…”
Section: Handling Speculative Results
confidence: 99%
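The bookkeeping this citing paper describes can be sketched as follows. This is a minimal, assumed reconstruction, not the cited system's actual implementation; the class and method names here are hypothetical stand-ins for its GetRecoveryPC/getRecoverySpaceID functions.

```python
# Hedged sketch: on the *first* read of each live-in variable, record the
# reading instruction's address (PC) and the space ID in effect before the
# read. On misspeculation, these two values give the re-execution start
# point for the first accessed failing live-in variable.
# All names are illustrative, not the cited system's API.

class SpeculationContext:
    def __init__(self):
        # live-in variable name -> (recovery_pc, recovery_space_id)
        self.first_read = {}

    def on_read(self, var, pc, space_id):
        # setdefault keeps only the first recorded read of each variable
        self.first_read.setdefault(var, (pc, space_id))

    def get_recovery_pc(self, var):
        return self.first_read[var][0]

    def get_recovery_space_id(self, var):
        return self.first_read[var][1]

ctx = SpeculationContext()
ctx.on_read("x", pc=0x400A10, space_id=2)
ctx.on_read("x", pc=0x400B20, space_id=3)  # ignored: not the first read
ctx.on_read("y", pc=0x400C30, space_id=2)

assert ctx.get_recovery_pc("x") == 0x400A10
assert ctx.get_recovery_space_id("x") == 2
```

The key design point the citation describes is that only the first read per live-in variable matters, since re-execution must restart from the earliest point at which stale data could have been consumed.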
“…Instead of state separation, Kulkarni et al. proposed a rollback-based speculative parallelization technique [15-18, 21]. They introduce two special constructs that users can employ to identify speculative parallelism.…”
Section: Related Work
confidence: 99%
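The rollback-based scheme this citation refers to can be illustrated with a toy sketch: each worklist item is executed optimistically against a checkpoint of the shared state, and a detected conflict discards the item's updates and requeues it. This is an assumed, single-threaded illustration of the general technique, not the actual constructs of the cited system.

```python
# Toy sketch of rollback-based speculative execution over a worklist:
# checkpoint state, run the iteration optimistically, and on conflict
# restore the checkpoint and retry the item. Names are illustrative.

import copy

def speculative_for_each(worklist, state, body, conflicts):
    """Run body(item, state) for each item; roll back and retry on conflict."""
    while worklist:
        item = worklist.pop(0)
        snapshot = copy.deepcopy(state)   # checkpoint before speculating
        body(item, state)
        if conflicts(item):
            # Misspeculation: discard the item's updates and retry later.
            state.clear()
            state.update(snapshot)
            worklist.append(item)
    return state

# Example: sum items; item 3 "conflicts" on its first attempt only.
failed_once = set()
def conflicts(item):
    if item == 3 and item not in failed_once:
        failed_once.add(item)
        return True
    return False

result = speculative_for_each(
    [1, 2, 3], {"total": 0},
    lambda item, s: s.__setitem__("total", s["total"] + item),
    conflicts)
assert result["total"] == 6  # item 3 was rolled back once, then committed
```

The point of the sketch is the commit/abort discipline: an iteration's effects become visible only if no conflict is detected, which is what lets independent iterations run in parallel in the real system.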
“…Partitioned global address space (PGAS) [6,7] refers to a family of parallel programming models that aim to combine the performance advantage of MPI with the programmability of a shared-memory model. Galois [14,15] introduces a programming model to exploit the data parallelism in irregular applications. SpiceC [10] is a recently proposed parallel programming model for both multicores and manycores.…”
Section: Related Work
confidence: 99%
“…Each partition has a lock that must be acquired by a thread that wants to access a node or edge in that partition. The over-decomposition permits a thread to do useful work even if one of its partitions is temporarily locked by a different thread [15]. It is difficult to partition the underlying graph in the Boruvka MST algorithm since it is ultimately coalesced into a single node, so we do not use partitioning for this algorithm; instead we associate locks with nodes and edges of the graph.…”
Section: Experimental Evaluation
confidence: 99%
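The over-decomposition scheme described in this citation — more partitions than threads, each guarded by its own lock, so a thread blocked on one partition can turn to useful work elsewhere — can be sketched as below. This is a minimal assumed design, not the cited implementation; partition assignment by hashing is an arbitrary illustrative choice.

```python
# Sketch of over-decomposition: graph elements are mapped to partitions,
# each partition has a lock, and a thread uses a non-blocking acquire so
# that a busy partition does not stall it. Names are illustrative.

import threading

class PartitionedGraph:
    def __init__(self, num_partitions):
        self.locks = [threading.Lock() for _ in range(num_partitions)]

    def partition_of(self, node):
        # Illustrative assignment; a real system would use a graph partitioner.
        return hash(node) % len(self.locks)

    def try_work_on(self, node, work):
        """Run work(node) only if the node's partition lock is free."""
        p = self.partition_of(node)
        if self.locks[p].acquire(blocking=False):
            try:
                work(node)
                return True
            finally:
                self.locks[p].release()
        return False  # partition busy: the caller moves on to other work

g = PartitionedGraph(num_partitions=8)
done = []
assert g.try_work_on("a", done.append) is True
assert done == ["a"]
```

The non-blocking acquire is the crux: with more partitions than threads, a `False` return lets the thread pick another of its partitions instead of waiting, which is exactly the benefit the citing paper attributes to over-decomposition.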