2003
DOI: 10.1007/3-540-45009-2_17
Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster

Abstract: In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences. 1 Introduction Computer architectures using clusters of P…


Citation Types: 3 supporting, 11 mentioning, 0 contrasting

Year Published: 2004–2015

Cited by 12 publications (14 citation statements)
References 12 publications
“…Although we have not been able to do a direct comparison of our results with other SDSM system, we can tell that similar results were obtained by Müller et al using a relaxed consistency SDSM [7].…”
Section: EP (supporting)
confidence: 58%
“…The most significant ones are the OpenMP translator developed by Hu et al [8], OpenMP on the SCASH system [7,14], and ParADE [11].…”
Section: Related Work (mentioning)
confidence: 99%
“…Several combinations of OpenMP runtime plus SDSM systems have been implemented [12]. The most significant ones are the OpenMP translator developed by Hu et al [13] on top of Treadmarks [17], OpenMP on the SCASH system [11], and ParADE [16]. There is also NanosDSM [9] which uses sequential-semantic memory consistency.…”
Section: Related Work (mentioning)
confidence: 99%
“…One approach uses MPI on top of OpenMP: MPI distributes tasks to cluster-nodes, while OpenMP distributes the tasks further within each node. In the second approach, OpenMP establishes a cluster-wide Distributed Shared-Memory (DSM), which is implemented using MPI [9]. While the second approach is attractive due to OpenMP's ease of programming, its main disadvantage is the complexity and overhead to support DSM in large-scale configurations.…”
Section: Introduction (mentioning)
confidence: 99%