1997
DOI: 10.1006/jpdc.1997.1335

Parallel Application Scheduling on Networks of Workstations


Cited by 36 publications (22 citation statements)
References 27 publications
“…P_2 cannot start processing the first tile of the i-th row of f_2 before P_1 has computed the last tile of the i-th row of f_1 and has sent that data to P_2, that is, at time-step s_1 + i · t_1 + t_com. Since P_2 starts processing the first row of f_2 at time s_2, where s_2 ≥ s_1 + t_1 + t_com, it is not delayed by P_1.…”
Section: Heuristic Allocation By Block Of Columns (mentioning)
confidence: 99%
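The timing argument in the quoted passage can be checked directly. The sketch below is illustrative only: it follows the quote's notation (start times s_1 and s_2, per-row compute time t_1, communication delay t_com) and adds one assumption of its own, namely that P_2 also spends t_1 per row of f_2.

```python
# Illustrative check of the quoted no-delay condition (names follow the quote;
# the per-row time t_1 for P_2 is an added assumption, not from the paper).

def data_ready(s1: float, t1: float, t_com: float, i: int) -> float:
    """Time at which row i of f_1 has been computed by P_1 and delivered to P_2."""
    return s1 + i * t1 + t_com

def p2_reaches_row(s2: float, t1: float, i: int) -> float:
    """Time at which an undelayed P_2, spending t_1 per row, starts row i of f_2."""
    return s2 + (i - 1) * t1

s1, t1, t_com = 0.0, 2.0, 0.5
s2 = s1 + t1 + t_com   # the quoted condition s_2 >= s_1 + t_1 + t_com, taken with equality

for i in range(1, 8):
    # By the time P_2 is ready to start row i at its own pace,
    # the data for row i has already arrived from P_1.
    assert p2_reaches_row(s2, t1, i) >= data_ready(s1, t1, t_com, i)
print("P_2 is never delayed by P_1 under these parameters")
```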
“…Distributing the computations (together with the associated data) can be performed either dynamically or statically, or by a mixture of both. At first sight, we may think that dynamic strategies such as a greedy algorithm are likely to perform better, because the machine loads will be self-regulated, hence self-balanced, if processors pick up new tasks just as they terminate their current computation (see the survey paper of Berman [5] and the more specialized references [2,12] for further details). However, data dependences may slow the whole process down to the pace of the slowest processor, as we demonstrate in Section 4.…”
Section: Introduction (mentioning)
confidence: 99%
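The greedy dynamic strategy sketched in the quote, where idle processors pick up the next task as soon as they finish, amounts to list scheduling onto the earliest-available worker. Below is a minimal illustration with made-up task costs and worker speeds; none of the names or numbers come from the cited papers.

```python
import heapq

def greedy_schedule(task_costs, worker_speeds):
    """Greedily hand each task to whichever worker becomes free first."""
    # Heap of (time at which the worker is next free, worker index).
    free_at = [(0.0, w) for w in range(len(worker_speeds))]
    heapq.heapify(free_at)
    assignment = [[] for _ in worker_speeds]

    for task, cost in enumerate(task_costs):
        t, w = heapq.heappop(free_at)           # earliest-available worker
        finish = t + cost / worker_speeds[w]    # a slower worker takes longer
        assignment[w].append(task)
        heapq.heappush(free_at, (finish, w))

    makespan = max(t for t, _ in free_at)
    return makespan, assignment

# Independent tasks self-balance: the faster worker simply absorbs more of them.
print(greedy_schedule([3, 1, 4, 1, 5, 9, 2, 6], worker_speeds=[1.0, 0.5]))
```

As the quote points out, this self-balancing argument breaks down once tasks carry data dependences: a fast worker may sit idle waiting for a predecessor that runs on the slowest processor.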
“…With adaptive partitioning, processors in the system are not divided before the computation. When a new job arrives, a job manager in the system first locates idle processors and then allocates a certain number of those idle processors to that job according to some processor allocation policy, e.g., those described in [2,10,14,15,17,18,20]. Therefore, the boundary lines are drawn during the computation and will disappear after the job terminates.…”
Section: Introduction (mentioning)
confidence: 99%
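The adaptive-partitioning idea in the quote can be summarized in a few lines: no boundaries exist before a job arrives; the job manager carves an allocation out of the currently idle processors and returns it on termination. The equal-share policy below is only one illustrative choice among the allocation policies the citing paper lists, and every name in the sketch is hypothetical.

```python
class AdaptiveJobManager:
    """No fixed partitions: idle processors are granted on job arrival and
    reclaimed on termination (hypothetical names and policy)."""

    def __init__(self, num_processors: int):
        self.idle = set(range(num_processors))
        self.running = {}                 # job id -> set of granted processors

    def submit(self, job_id, requested: int, expected_jobs: int = 1):
        # Equal-share policy: leave room for other jobs expected to arrive soon.
        share = max(1, len(self.idle) // max(1, expected_jobs))
        count = min(requested, share, len(self.idle))
        grant = {self.idle.pop() for _ in range(count)}
        self.running[job_id] = grant
        return grant

    def terminate(self, job_id):
        # The job's boundary disappears; its processors become idle again.
        self.idle |= self.running.pop(job_id)

mgr = AdaptiveJobManager(16)
print(sorted(mgr.submit("A", requested=8, expected_jobs=2)))   # 8 processors for A
print(sorted(mgr.submit("B", requested=8)))                    # 8 of the remaining
mgr.terminate("A")                                             # A's processors return to the pool
```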
“…Some representative examples of generic schedulers include [37], [38]. The techniques proposed in [39], [40] consider the class of malleable jobs, where the number of processors provisioned can be varied at runtime. Similarly, the scheduling techniques presented in [41], [42] consider moldable jobs that can be run on different numbers of processors.…”
Section: Related Work (mentioning)
confidence: 99%
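A brief sketch of the moldable-job notion mentioned in the quote: the processor count is fixed once, at launch, chosen from the counts the job supports; a malleable job, by contrast, could change that count while running. The Amdahl-style runtime model and all parameters below are assumptions for illustration, not taken from the cited techniques.

```python
def predicted_runtime(serial_fraction: float, t_serial: float, p: int) -> float:
    """Amdahl-style runtime estimate on p processors (illustrative model)."""
    return t_serial * (serial_fraction + (1.0 - serial_fraction) / p)

def choose_moldable_width(allowed_counts, available: int,
                          serial_fraction: float, t_serial: float) -> int:
    """A moldable job: pick one processor count at launch and never change it."""
    feasible = [p for p in allowed_counts if p <= available]
    return min(feasible, key=lambda p: predicted_runtime(serial_fraction, t_serial, p))

# With 8 processors currently free, the job launches on the best feasible width;
# a malleable job could later grow or shrink this number at runtime.
print(choose_moldable_width([1, 2, 4, 8, 16], available=8,
                            serial_fraction=0.1, t_serial=100.0))
```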