Seventh IEEE International Symposium on Cluster Computing and the Grid (CCGrid '07) 2007
DOI: 10.1109/ccgrid.2007.96
Reparallelization and Migration of OpenMP Programs

Cited by 6 publications (5 citation statements)
References 21 publications
“…Another approach to automating the generation of parallel code was developed within the recent research on OpenMP programs and reparallelizing them for the Grid [8]. This work also covers Java programs and the use of distributed shared memory (DSM) for data exchange among tasks, but it still requires programmers to provide dependence-free input and to explicitly declare parallel loops via OpenMP directives.…”
Section: Results
confidence: 99%
“…Some OpenMP extensions have tried to relax the requirement of keeping the number of threads participating in a parallel region constant, based on the idea of reparallelization (Klemm et al., 2007). Such an approach requires specifying safe points along the OpenMP code where the runtime may proceed with any work repartitioning.…”
Section: Related Work
confidence: 99%
“…However, because N, the number of momenta p[i], is very large, completing the loop is quite time-consuming. In this work, we use OpenMP to parallelize our code, following the standard method in the article [10]. The code of a normal "for" loop is shown in the left panel of figure 1, where function A(p[i]) and function 𝐵(p[i]) are subroutines computing the right-hand side of equation (13).…”
Section: Automatic Parallelization With OpenMP
confidence: 99%
“…OpenMP has been very successful in exploiting structured parallelism in applications [8][9]. In particular, article [10] introduced the fundamental design of the OpenMP specification v2.5 in GCC. The implementation supports all the programming languages (C, C++, and Fortran), and it is generally available on any platform that supports Portable Operating System Interface (POSIX) threads.…”
Section: Introduction
confidence: 99%