1997
DOI: 10.1006/jpdc.1997.1367
A Library-Based Approach to Task Parallelism in a Data-Parallel Language

Abstract: The data-parallel language High Performance Fortran (HPF) does not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how these common parallel program structures can be represented, with only minor extensions to the HPF model, by using a coordination library based on the Message Passing Interface (MPI). This library allows data-parallel tasks to exchange distributed data structures using calls to simple communi…
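The coordination approach sketched in the abstract couples separately compiled data-parallel tasks through MPI-style communication calls. The paper's actual HPF bindings are not reproduced here; the C sketch below only illustrates the underlying MPI idea, using MPI_Comm_split and MPI_Intercomm_create to build two task groups that exchange a block of data. The group layout, the array size N, and the rank-to-rank pairing are illustrative assumptions, not the library's API.

/* Minimal sketch (not the paper's HPF/MPI library API): two task groups
 * created by splitting MPI_COMM_WORLD, coupled through an intercommunicator
 * so that a data-parallel "producer" task can hand a block of data to a
 * "consumer" task.  Assumes an even number of processes. */
#include <mpi.h>
#include <stdio.h>

#define N 1024                      /* illustrative block size */

int main(int argc, char **argv)
{
    int world_rank, world_size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Split processes into two data-parallel tasks:
     * color 0 = producer task, color 1 = consumer task. */
    int color = (world_rank < world_size / 2) ? 0 : 1;
    MPI_Comm task_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &task_comm);

    /* Build an intercommunicator linking the two tasks; the remote leader
     * is the lowest world rank of the other group. */
    int remote_leader = (color == 0) ? world_size / 2 : 0;
    MPI_Comm inter_comm;
    MPI_Intercomm_create(task_comm, 0, MPI_COMM_WORLD, remote_leader,
                         99 /* tag */, &inter_comm);

    int task_rank;
    MPI_Comm_rank(task_comm, &task_rank);

    double buf[N];
    if (color == 0) {
        for (int i = 0; i < N; i++) buf[i] = (double)i;   /* "compute" */
        /* Each producer process sends its local block to the consumer
         * process with the same rank in the other group. */
        MPI_Send(buf, N, MPI_DOUBLE, task_rank, 0, inter_comm);
    } else {
        MPI_Recv(buf, N, MPI_DOUBLE, task_rank, 0, inter_comm,
                 MPI_STATUS_IGNORE);
        if (task_rank == 0)
            printf("consumer received block, buf[1] = %g\n", buf[1]);
    }

    MPI_Comm_free(&inter_comm);
    MPI_Comm_free(&task_comm);
    MPI_Finalize();
    return 0;
}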

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
24
0

Year Published

1999
1999
2008
2008

Publication Types

Select...
5
3
1

Relationship

0
9

Authors

Journals

Cited by 30 publications (24 citation statements). References 22 publications.
“…Tab. 4 shows the evaluation process of Pipe(it_f, it_d), labelling each transformation with its cost load.…”
Section: Computational Costs Analysis
confidence: 99%
“…On the other hand, a parallel application also has to deal with data access concerns that can heavily influence both the programming phase and the final computational cost. Unfortunately, most work on structured parallel programming environments defines a two-tier architecture model in which data accesses must be specified in detail on top of control abstractions [2,3] or vice versa [4,5,6,7]. As a consequence, the model of parallelism is a mixture of tightly coupled data and control concerns that is difficult to formalize and, hence, to analyze statically and/or dynamically for optimization purposes.…”
Section: Introduction
confidence: 99%
“…The fft2d component of the example described above can be structured as a pipeline of two stages in order to increase its efficiency, as shown in several works [11][5]. The first stage performs independent one-dimensional FFTs on the columns of the input arrays.…”
Section: Configuration View
confidence: 99%
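As a rough illustration of the two-stage structure mentioned in the snippet above, the C sketch below applies independent one-dimensional transforms first to the columns and then to the rows of a small matrix. A naive O(n^2) DFT stands in for a real FFT, and both stages run sequentially in one process; in the cited pipeline they would be separate, concurrently running tasks. The matrix size N and the sample input are assumptions made for the example.

/* Two-stage 2-D transform sketch: stage 1 transforms the columns,
 * stage 2 transforms the rows of the intermediate result. */
#include <complex.h>
#include <stdio.h>

#define N  8                               /* illustrative matrix size */
#define PI 3.14159265358979323846

/* Naive 1-D DFT of `in` into `out` (placeholder for a real FFT). */
static void dft1d(const double complex *in, double complex *out, int n)
{
    for (int k = 0; k < n; k++) {
        out[k] = 0;
        for (int j = 0; j < n; j++)
            out[k] += in[j] * cexp(-2.0 * I * PI * k * j / n);
    }
}

int main(void)
{
    static double complex a[N][N], tmp[N][N], result[N][N];

    /* Some sample input. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = i + j;

    /* Stage 1: independent 1-D transforms on each column. */
    for (int j = 0; j < N; j++) {
        double complex col[N], colft[N];
        for (int i = 0; i < N; i++) col[i] = a[i][j];
        dft1d(col, colft, N);
        for (int i = 0; i < N; i++) tmp[i][j] = colft[i];
    }

    /* Stage 2: independent 1-D transforms on each row of the intermediate. */
    for (int i = 0; i < N; i++)
        dft1d(tmp[i], result[i], N);

    printf("result[0][0] = %g + %gi\n",
           creal(result[0][0]), cimag(result[0][0]));
    return 0;
}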
“…This model is typically used on clusters of shared memory machines, where a shared memory model is used within a node and a message passing model is used to exchange data between nodes. Examples of hybrid models include OpenMP with MPI [23] and HPF with MPI [20].…”
Section: Hybrid Models
confidence: 99%
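A minimal sketch of the hybrid model described in this snippet, assuming one MPI process per node with OpenMP threads sharing memory inside each process; the computation (a partitioned sum) and the element count are illustrative only.

/* Hybrid MPI+OpenMP sketch: OpenMP threads work within a process,
 * MPI combines the per-process results across nodes. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define LOCAL_N 1000000      /* illustrative per-process element count */

int main(int argc, char **argv)
{
    int provided, rank;
    /* Request FUNNELED support: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Shared-memory part: OpenMP threads reduce over the local data. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+ : local_sum)
    for (int i = 0; i < LOCAL_N; i++)
        local_sum += (double)i / (rank + 1);

    /* Message-passing part: combine the per-node results across nodes. */
    double global_sum = 0.0;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %g (with %d OpenMP threads per process)\n",
               global_sum, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}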