2006
DOI: 10.1142/s0129054106003838

Critical Path Scheduling Parallel Programs on an Unbounded Number of Processors

Abstract: In this paper we present an efficient algorithm, called Dynamic Critical Path Scheduling (DCPS), for compile-time scheduling and clustering of parallel programs onto parallel processing systems with distributed memory. DCPS is superior to several other algorithms from the literature in terms of computational complexity, processor consumption and solution quality. DCPS has a time complexity of O(e + v log v), as opposed to the O((e + v) log v) of the DSC algorithm, the best previously known algorithm. Experiment…
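The abstract's complexity comparison refers to a standard weighted task-graph model: a DAG with v tasks and e edges, where nodes carry computation costs and edges carry communication costs. The snippet below is a minimal sketch of that input model in Python; the task names and weights are illustrative assumptions, not values from the paper.

```python
# Minimal, assumed sketch of the weighted task-graph (DAG) input model used
# by clustering/scheduling algorithms such as DCPS and DSC.
# Node weights model computation costs; edge weights model communication
# costs, paid only when the two tasks end up on different processors.

# computation cost of each task (illustrative values)
comp = {"a": 2, "b": 3, "c": 4, "d": 1}

# (parent, child) -> communication cost (illustrative values)
comm = {("a", "b"): 5, ("a", "c"): 2, ("b", "d"): 3, ("c", "d"): 4}

# successor lists derived from the edge set
succ = {t: [] for t in comp}
for u, v in comm:
    succ[u].append(v)
```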

Cited by 9 publications (5 citation statements)
References 10 publications
“…A natural idea is therefore to determine first which nodes should always be executed on the same processor before the actual scheduling. Task clustering is a technique that follows this idea [10,11], and it is often used for scheduling tasks on an unlimited number of processors [12,13,14]. Nevertheless, it is proposed as an initial step in scheduling when the number of processors is bounded [7].…”
Section: Scheduling Model
confidence: 99%
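As a rough illustration of the clustering idea in the statement above, the following Python sketch merges the endpoints of the heaviest remaining inter-cluster edge, zeroing its communication cost. The greedy rule, task names and weights are assumptions for illustration only; actual clustering heuristics such as DSC or DCPS use critical-path information to choose which edges to zero.

```python
# Toy illustration of task clustering: tasks placed in the same cluster
# no longer pay the communication cost on the edge between them.
# The "zero the heaviest edge" rule below is an illustrative assumption,
# not the DCPS/DSC merging criterion.

def cluster_heaviest_edge(comm, cluster_of):
    """Merge the endpoints of the heaviest remaining inter-cluster edge."""
    inter = {e: w for e, w in comm.items()
             if cluster_of[e[0]] != cluster_of[e[1]]}
    if not inter:
        return cluster_of
    (u, v), _ = max(inter.items(), key=lambda kv: kv[1])
    src, dst = cluster_of[v], cluster_of[u]
    # move every task of v's cluster into u's cluster
    return {t: dst if c == src else c for t, c in cluster_of.items()}

# start with one cluster per task (unbounded processors), then merge once
comm = {("a", "b"): 5, ("a", "c"): 2, ("b", "d"): 3, ("c", "d"): 4}
clusters = {t: t for t in "abcd"}
clusters = cluster_heaviest_edge(comm, clusters)
print(clusters)  # 'b' joins 'a': {'a': 'a', 'b': 'a', 'c': 'c', 'd': 'd'}
```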
“…The critical path (CP) is the longest directed path from the entry task with no incoming edge to the final task with no outgoing edge [16,34]. It is composed of the tasks:…”
Section: Task T_k
confidence: 99%
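Given the definition quoted above, the critical path of a task DAG can be found with a single longest-path pass over a topological order, in O(e + v) time. The Python sketch below assumes node weights are computation costs and edge weights are communication costs; the example graph and its values are illustrative.

```python
# Longest (critical) path from an entry task to an exit task in a DAG,
# counting node computation costs and edge communication costs.
# Graph and weights are illustrative assumptions.
from collections import defaultdict

comp = {"a": 2, "b": 3, "c": 4, "d": 1}
comm = {("a", "b"): 5, ("a", "c"): 2, ("b", "d"): 3, ("c", "d"): 4}

def critical_path(comp, comm):
    succ, indeg = defaultdict(list), {t: 0 for t in comp}
    for u, v in comm:
        succ[u].append(v)
        indeg[v] += 1

    # topological order (Kahn's algorithm)
    order, stack = [], [t for t in comp if indeg[t] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)

    # longest path ending at each task, with predecessor links
    dist = {t: comp[t] for t in comp}
    pred = {t: None for t in comp}
    for u in order:
        for v in succ[u]:
            cand = dist[u] + comm[(u, v)] + comp[v]
            if cand > dist[v]:
                dist[v], pred[v] = cand, u

    # walk back from the heaviest exit task
    end = max(dist, key=dist.get)
    path = []
    while end is not None:
        path.append(end)
        end = pred[end]
    return list(reversed(path)), max(dist.values())

print(critical_path(comp, comm))  # (['a', 'b', 'd'], 14)
```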
“…A common approach is to decompose the original program into smaller computations and then to construct a distributed schedule for the computations [5,15,20,21,28,53,60,64,69]. The distributed scheduling task consists of several subtasks: assigning each computation to a processor, chronologically ordering the computations on each processor, and scheduling the data movement so that each computation has the necessary data when it executes.…”
Section: Research Motivation and Target Problem
confidence: 99%
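The three subtasks named in this statement (processor assignment, per-processor ordering, and data-movement scheduling) can be captured by a simple schedule record. The Python sketch below is an assumed representation for illustration only, not the data structure of any cited algorithm.

```python
# Assumed minimal representation of a distributed schedule, covering the
# three subtasks named above: where each computation runs, in what order
# it runs on its processor, and which data transfers must precede it.
from dataclasses import dataclass, field

@dataclass
class DistributedSchedule:
    assignment: dict = field(default_factory=dict)  # task -> processor id
    order: dict = field(default_factory=dict)       # processor id -> tasks in start order
    transfers: list = field(default_factory=list)   # (src_task, dst_task, data) tuples

# toy two-processor schedule for the four-task example used above
sched = DistributedSchedule(
    assignment={"a": 0, "b": 0, "d": 0, "c": 1},
    order={0: ["a", "b", "d"], 1: ["c"]},
    transfers=[("a", "c", "a_out"), ("c", "d", "c_out")],
)
```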