2009 17th Euromicro International Conference on Parallel, Distributed and Network-Based Processing
DOI: 10.1109/pdp.2009.56
Impact of the Memory Hierarchy on Shared Memory Architectures in Multicore Programming Models

Abstract: Many- and multicore architectures put great pressure on parallel programming, but they also offer a unique opportunity to propose new programming models that automatically exploit the parallelism of these architectures. OpenMP is a well-known standard that exploits parallelism in shared memory architectures. SMPSs has recently been proposed as a task-based programming model that exploits parallelism at the task level and takes data dependencies between tasks into account. However, besides parallelism in the progr…
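
As a point of reference for the OpenMP model mentioned in the abstract, shared-memory parallelism is typically expressed by distributing loop iterations over a team of threads with a worksharing directive. The snippet below is a minimal, standard OpenMP example, not code from the paper; the function name and parameters are illustrative.

```c
/* Minimal standard OpenMP worksharing example: iterations of the loop are
 * divided among the threads of the team, and each thread updates a
 * disjoint part of the shared output array. */
void axpy(int n, float a, const float *x, float *y)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}
```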

Cited by 4 publications (3 citation statements); References: 11 publications
“…In this case, the input, output, inout clauses are used to calculate the task data dependencies in order to build the task DAG. However, this does not mean that accessing data locally in NUMA systems is not important, as is shown by Badia et al (2009) for the SMPSs case when used in an SGI Altix.…”
Section: SMPSs Runtime Specifics
confidence: 99%
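
The quoted point about input/output/inout clauses can be illustrated with a small, assumed SMPSs-style fragment: the runtime compares the arguments of successive task invocations and adds a DAG edge whenever a task reads or updates data produced by an earlier one. The pragma syntax, function names, and sizes below are illustrative, not code from the cited papers.

```c
/* Sketch: directionality clauses expose a task DAG
 * (SMPSs-style syntax assumed; names and sizes are hypothetical). */

#define N 256

#pragma css task output(v[N])
void init(float *v)
{
    for (int i = 0; i < N; i++) v[i] = 1.0f;
}

#pragma css task input(a[N]) inout(b[N])
void accumulate(const float *a, float *b)
{
    for (int i = 0; i < N; i++) b[i] += a[i];
}

#pragma css task input(b[N]) output(r[1])
void reduce(const float *b, float *r)
{
    float s = 0.0f;
    for (int i = 0; i < N; i++) s += b[i];
    *r = s;
}

void pipeline(const float *a, float *b, float *r)
{
    init(b);           /* task 1: produces b                  */
    accumulate(a, b);  /* task 2: depends on task 1 through b */
    reduce(b, r);      /* task 3: depends on task 2 through b */
    /* These invocations form a chain in the task DAG; calls operating on
     * independent data would carry no edges and could run concurrently. */
}
```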
“…Subsets of cores in a multicore machine may share different layers of memory levels. For example, usually, a small subset of cores shares L2 caches, while another subset of higher cardinality may share L3 caches, being the global memory shared by all the cores of the machine [33,34,35,36]. The modeling of such memory hierarchy sharing is still a challenge [1].…”
Section: Multicore Architectures - Models for Distributed and Shared M…
confidence: 99%
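
The cache-sharing pattern described in this quote can be inspected directly on Linux, where the kernel reports, for each cache level, which CPUs share it. The sketch below is Linux-specific and relies on the sysfs layout under /sys/devices/system/cpu, which is common on current kernels but not a portable API; it is an illustration, not code from the cited work.

```c
/* Linux-specific sketch: report which CPUs share each cache level seen by
 * CPU 0, using the sysfs files .../cache/indexN/level and
 * .../cache/indexN/shared_cpu_list. */
#include <stdio.h>

int main(void)
{
    for (int idx = 0; idx < 8; idx++) {
        char path[128], level[16] = "", shared[256] = "";
        FILE *f;

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/level", idx);
        if (!(f = fopen(path, "r")))
            break;                       /* no more cache levels */
        fscanf(f, "%15s", level);
        fclose(f);

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/shared_cpu_list", idx);
        if ((f = fopen(path, "r"))) {
            fscanf(f, "%255s", shared);
            fclose(f);
        }
        printf("cache index %d (L%s) shared by CPUs: %s\n", idx, level, shared);
    }
    return 0;
}
```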
“…[2]. The applications have been written in StarSs [3], a task-level data-flow programming model similar to Cilk [4], RapidMind [5], Sequoia [6], and Tflux-DDM [7]. These programming models let the programmer write a seemingly sequential program, and annotate the input and output parameters of functions that can potentially execute as parallel tasks.…”
Section: Introduction
confidence: 99%
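
The "seemingly sequential" style mentioned in this quote can be sketched as follows: the main loop is ordinary C, and only the per-block function carries an annotation; invocations on disjoint blocks have no mutual dependencies and may run as parallel tasks. The pragma and barrier syntax (SMPSs/StarSs-style) and all names below are assumptions for illustration, not code from the cited works.

```c
/* Sketch of a "seemingly sequential" StarSs/SMPSs-style program (assumed
 * syntax).  Each call to scale_block() becomes a task; because the blocks
 * are disjoint, the tasks carry no dependencies among themselves and may
 * run in parallel on the available cores. */

#define NB 16    /* number of blocks (hypothetical)   */
#define BS 1024  /* elements per block (hypothetical) */

#pragma css task inout(block[BS])
void scale_block(float block[BS], float factor)
{
    for (int i = 0; i < BS; i++)
        block[i] *= factor;
}

void scale_all(float data[NB][BS], float factor)
{
    for (int b = 0; b < NB; b++)
        scale_block(data[b], factor);   /* looks sequential, runs as tasks */

    #pragma css barrier                 /* wait for all outstanding tasks */
}
```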