1998
DOI: 10.1007/bfb0053990
Dynamic coscheduling on workstation clusters

Abstract: Coscheduling has been shown to be a critical factor in achieving efficient parallel execution in timeshared environments [12,19,4]. However, the most common approach, gang scheduling, has limitations in scaling, can compromise good interactive response, and requires that communicating processes be identified in advance. We explore a technique called dynamic coscheduling (DCS) which produces emergent coscheduling of the processes constituting a parallel job. Experiments are performed in a workstation environment…

Cited by 79 publications (78 citation statements)
References 20 publications
“…For this reason, if the task to be inserted on the RQ has any incoming message pending in the RMQ queue and the main memory in that node is overloaded, the inserted task is moved to the head of the RQ. Thus, CSM applies a dynamic technique [15] to ensure that fine-grained distributed applications are coscheduled. In this technique, the higher a task's message-receiving frequency, the higher the scheduling priority it is assigned.…”
Section: Algorithm 1 is implemented inside a generic routine (called …
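The insertion rule quoted above — pending messages plus memory pressure push a task to the head of the ready queue — can be sketched as follows. The names RQ and RMQ follow the quote; the threshold value and data structures are illustrative assumptions, not taken from the cited paper:

```python
from collections import deque

# Illustrative sketch of the quoted rule: a task with pending messages
# on a memory-overloaded node goes to the head of the ready queue (RQ);
# otherwise it is appended at the tail. Threshold is an assumption.

MEM_OVERLOAD_THRESHOLD = 0.9  # fraction of main memory in use (assumed)

def insert_task(rq: deque, task, pending_msgs: int, mem_usage: float):
    """Insert `task` into the ready queue `rq`.

    pending_msgs: messages waiting for `task` in its receive queue (RMQ).
    mem_usage:    fraction of this node's main memory currently in use.
    """
    if pending_msgs > 0 and mem_usage > MEM_OVERLOAD_THRESHOLD:
        rq.appendleft(task)   # coscheduling boost: run next
    else:
        rq.append(task)       # normal FIFO insertion

rq = deque(["a", "b"])
insert_task(rq, "t1", pending_msgs=2, mem_usage=0.95)
insert_task(rq, "t2", pending_msgs=0, mem_usage=0.95)
print(list(rq))  # → ['t1', 'a', 'b', 't2']
```

Moving the boosted task to the head rather than merely raising a priority number keeps the sketch independent of any particular priority scheme while preserving the effect described in the quote.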
“…Thus, coscheduling may be applied to reduce message waiting times and to make good use of idle CPU cycles when distributed applications are executed in a cluster or NOW system. Coscheduling decisions are made by taking implicit runtime information about the jobs into account, basically consumed CPU cycles and communication events [12,13,14,15,16,22]. Our framework focuses on an implicit coscheduling environment, one that schedules the correspondents (the most recently communicating processes) across the whole system at the same time, taking into account both high message-communication frequency and low penalty introduced into the delayed processes.…”
Section: Introduction
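The implicit, communication-driven policy described above can be sketched with a toy local scheduler that tracks recent message arrivals per process and favours frequent communicators. The decay factor and scoring scheme are assumptions for illustration, not details of the cited framework:

```python
class ImplicitCoscheduler:
    """Toy local scheduler that prioritises frequent communicators.

    Each message arrival bumps a per-process score; scores decay every
    scheduling tick so that priority reflects *recent* communication.
    The decay factor and scoring are illustrative assumptions.
    """

    DECAY = 0.5  # assumed per-tick decay of communication scores

    def __init__(self):
        self.comm_score = {}  # process id -> recent-communication score

    def on_message_arrival(self, pid: str):
        # Implicit runtime information: a message arrival hints that the
        # sender is currently scheduled, so boost the receiver.
        self.comm_score[pid] = self.comm_score.get(pid, 0.0) + 1.0

    def tick(self):
        # Age the scores so stale communication stops influencing priority.
        for pid in self.comm_score:
            self.comm_score[pid] *= self.DECAY

    def pick_next(self, ready: list) -> str:
        # Among ready processes, run the most recent communicator first.
        return max(ready, key=lambda p: self.comm_score.get(p, 0.0))

sched = ImplicitCoscheduler()
for _ in range(3):
    sched.on_message_arrival("p2")
sched.on_message_arrival("p1")
print(sched.pick_next(["p1", "p2", "p3"]))  # → p2
```

When each node applies this rule independently, communicating peers tend to be scheduled at the same time without any explicit global synchronization, which is the "emergent" behaviour the surrounding text attributes to implicit coscheduling.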
“…Two main strategies for coordinating individual local schedulers have been proposed: gang scheduling (GS) [8], [14] and communication-driven coscheduling (CDC) [2]- [5], [11], [12], [15], [19]. GS uses explicit global synchronization to schedule all the processes of a job simultaneously.…”
Section: Introduction
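The gang-scheduling (GS) side of the contrast above can be pictured as an Ousterhout-style matrix: global time slots are rows, nodes are columns, and all processes of a job occupy one row, so a synchronized slot switch runs the whole job at once. The job sizes and layout here are illustrative:

```python
# Toy gang schedule: rows are global time slots, columns are nodes.
# All processes of one job share a row, so switching slots in lockstep
# coschedules the entire job. Layout and names are illustrative.

NODES = 4

def build_gang_schedule(jobs):
    """jobs: dict of job name -> number of processes (<= NODES)."""
    schedule = []
    for name, nprocs in jobs.items():
        # One slot per job; CPUs not used by the job sit idle in that
        # slot, which is one source of GS's efficiency limitations.
        row = [f"{name}.{i}" for i in range(nprocs)]
        row += ["idle"] * (NODES - nprocs)
        schedule.append(row)
    return schedule

sched = build_gang_schedule({"A": 4, "B": 2})
for slot, row in enumerate(sched):
    print(f"slot {slot}: {row}")
# slot 0: ['A.0', 'A.1', 'A.2', 'A.3']
# slot 1: ['B.0', 'B.1', 'idle', 'idle']
```

The idle entries in job B's slot illustrate why the quoted text calls CDC systems "more loosely coupled": they fill such gaps opportunistically instead of reserving whole synchronized slots.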
“…Explicit gang-scheduling systems always run all the tasks of a job simultaneously [5-10, 17, 21]. In contrast, communication-driven systems [3,19,20] are more loosely coupled, and schedule tasks based on message arrival. Independent of the inter-task scheduling approach, we address those cases in which all processes within a task are closely coupled, like the csh example of the first paragraph.…”
Section: Introduction