1995
DOI: 10.1007/bf01245404

Partitioning and mapping of nested loops for linear array multicomputers

Abstract: In distributed-memory multicomputers, minimizing interprocessor communication is the key to the efficient execution of parallel programs. In order to reduce the amount of communication overhead, parallel programs on multicomputers must be carefully scheduled by parallelizing compilers. This paper proposes some compilation techniques for partitioning and mapping nested loops with constant data dependences onto linear array multicomputers. First, a systematic partition strategy is proposed to project an…
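To make the setting concrete, here is a minimal C sketch of the problem class the paper addresses: a loop nest with constant data dependences, together with one plausible projection onto a linear processor array. This is an illustration only (the abstract above is truncated), not the paper's actual partition strategy; the names loop_nest and owner are invented for the sketch.

    #define N 8                    /* problem size, arbitrary for the sketch */

    int A[N][N];

    /* A doubly nested loop whose data dependences are the constant
     * vectors d1 = (1,0) and d2 = (0,1): iteration (i,j) reads the
     * values written by iterations (i-1,j) and (i,j-1). */
    void loop_nest(void) {
        for (int i = 1; i < N; i++)
            for (int j = 1; j < N; j++)
                A[i][j] = A[i-1][j] + A[i][j-1];
    }

    /* One plausible projection onto a linear array of P processors:
     * map iteration (i,j) to processor i mod P.  The dependence (0,1)
     * then stays inside a processor, and (1,0) only ever crosses to
     * the neighboring processor -- the one communication pattern a
     * linear array supports cheaply. */
    int owner(int i, int j, int P) {
        (void)j;                   /* the j dimension is projected away */
        return i % P;
    }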

Cited by 10 publications (3 citation statements)
References 19 publications
“…Over the last decade, a great number of researchers have paid attention to maximizing parallelism and minimizing communication for a given program executed on a parallel machine [1,4,5,13,14,16-18,21]. Chen and Sheu [4], Lim et al. [13,14], Ramanujam and Sadayappan [16], and Shih, Sheu, and Huang [18] presented approaches to analyzing the data reference patterns of programs with nested-loop structures so that the parallelized program can run on a parallel machine in a communication-free manner, subject to some constraints.…”
Section: Introduction
confidence: 99%
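As a toy illustration of what “communication-free” means in this quote (my own example, not one drawn from the cited papers): when every dependence stays within a single row of the array, distributing whole rows across processors removes interprocessor communication entirely.

    #define N 64

    double A[N][N], B[N][N];

    /* All dependences have the form (0,1): iteration (i,j) reads only
     * data produced in the same row i.  Assigning each row to one
     * processor therefore yields a communication-free partition. */
    void row_independent_nest(void) {
        for (int i = 0; i < N; i++)        /* rows: distribute freely */
            for (int j = 1; j < N; j++)
                A[i][j] = A[i][j-1] + B[i][j];
    }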
“…In order to reduce the communication overhead, as far as fine-grain parallelism is concerned, several methods have been proposed to group together neighboring chains of iterations [32,40] while preserving the optimal hyperplane schedule [17,41,45]. As far as coarse-grain parallelism is concerned, Irigoin and Triolet proposed supernode partitioning [29] of the iteration space, where neighboring iteration points are grouped together to build a larger computation node (tile) that can be executed atomically, without intervening communication.…”
confidence: 99%
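A minimal sketch of the supernode (tiling) transformation described above, assuming the same (1,0)/(0,1) dependence pattern as in the earlier sketch; the tile size T is arbitrary. It is shown sequentially for clarity: a parallel execution would schedule the tiles along wavefronts so that cross-tile dependences are still respected.

    #define N 64
    #define T  8            /* tile (supernode) edge; assumed to divide N */

    double A[N][N];

    /* The iteration space is cut into T x T tiles, each executed
     * atomically.  Data now crosses a processor boundary once per tile
     * edge instead of once per iteration, cutting communication volume
     * by roughly a factor of T. */
    void tiled_nest(void) {
        for (int ii = 0; ii < N; ii += T)                        /* tile loops */
            for (int jj = 0; jj < N; jj += T)
                for (int i = ii > 0 ? ii : 1; i < ii + T; i++)   /* intra-tile */
                    for (int j = jj > 0 ? jj : 1; j < jj + T; j++)
                        A[i][j] = A[i-1][j] + A[i][j-1];
    }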
“…In order to achieve maximum acceleration of the final program, one of the key issues that must be taken into account is minimizing the communication overhead, which considerably slows the system down. As far as fine-grain parallelism is concerned, several methods have been proposed to group together neighboring chains of iterations [10,7] while preserving the optimal hyperplane schedule [13,11,3]. As far as coarse-grain parallelism is concerned, researchers address the communication overhead by applying the supernode (tiling) transformation.…”
Section: Introduction
confidence: 99%
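For completeness, a sketch of the hyperplane (wavefront) schedule both quotes refer to, again assuming dependence vectors (1,0) and (0,1). The linear schedule t(i,j) = i + j is valid because t strictly increases along every dependence, so all iterations on one hyperplane i + j = t are mutually independent and may run in parallel.

    #define N 64

    double A[N][N];

    /* Wavefront execution: the outer loop walks the hyperplanes
     * i + j = t in time order; the inner loop enumerates the
     * iterations of one hyperplane, all of which are independent. */
    void hyperplane_nest(void) {
        for (int t = 2; t <= 2 * (N - 1); t++) {
            int lo = t - (N - 1) > 1 ? t - (N - 1) : 1;
            int hi = t - 1 < N - 1 ? t - 1 : N - 1;
            for (int i = lo; i <= hi; i++) {       /* parallel across i */
                int j = t - i;
                A[i][j] = A[i-1][j] + A[i][j-1];
            }
        }
    }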