2009 International Conference on Parallel Processing
DOI: 10.1109/icpp.2009.19
Efficient Scheduling of Nested Parallel Loops on Multi-Core Systems

Cited by 8 publications (2 citation statements). References 18 publications.
“…This approach can be systematized as follows: first, determine which parallel inner do loops can benefit from parallelism, i.e., loops whose workload exceeds the overhead of their parallel implementation [3]; second, partition the iterations of the selected parallel loops among concurrent threads; third, privatize auxiliary data that do not require global updates, i.e., reads and writes from multiple participating threads; and finally, protect updates to global shared data with locks.…”
Section: Data-sharing Approach
confidence: 99%
“…Existing compiler and run-time systems tune parameters such as the degree of parallelism of data-parallel (DOALL) loops and the block size of loops, either statically or dynamically, to match the execution environment [3,5,14,17,41,44]. These systems are limited to optimizing array-based programs with communication-free data parallelism, where the performance impact of those parameters can be modeled relatively easily.…”
Section: Introduction
confidence: 99%