1994
DOI: 10.1109/71.282559
Job scheduling is more important than processor allocation for hypercube computers

Abstract: Managing computing resources in a hypercube entails two steps. First, a job must be chosen to execute from among those waiting (job scheduling). Next, a particular subcube within the hypercube must be allocated to that job (processor allocation). Whereas processor allocation has been well studied, job scheduling has been largely neglected. The goal of this paper is to compare the roles of processor allocation and job scheduling in achieving good performance on hypercube computers. We show that job scheduling has …

Cited by 78 publications (66 citation statements)
References 15 publications
“…But what if there is a correlation between size and running time? If this is an inverse correlation, we find a win-win situation: the larger jobs are also shorter, so packing them first is statistically similar to using SJF (shortest job first), which is known to lead to the minimal average runtime [418]. But if size and runtime are correlated, and large jobs run longer, scheduling them first may cause significant delays for subsequent smaller jobs, leading to dismal average performance results [449].…”
Section: Correlations In Workloads
confidence: 98%
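The statement above cites the classical result that SJF (shortest job first) minimizes average completion time. A minimal sketch, with hypothetical job runtimes, shows the effect by comparing arrival order against shortest-first order:

```python
# Sketch: SJF vs. FCFS average completion time on one machine.
# Runtimes below are hypothetical, for illustration only.

def avg_completion(runtimes):
    """Average completion time when jobs run back-to-back in the given order."""
    clock, total = 0, 0
    for r in runtimes:
        clock += r          # this job finishes at the current clock + its runtime
        total += clock
    return total / len(runtimes)

jobs = [8, 1, 3]                        # arbitrary example runtimes
fcfs = avg_completion(jobs)             # run in arrival order
sjf = avg_completion(sorted(jobs))      # run shortest first
# fcfs = 29/3, sjf = 17/3: shortest-first strictly lowers the average
```

Placing short jobs first means fewer jobs wait behind long ones, which is why an inverse size/runtime correlation (big jobs packed first happen to be short) mimics SJF statistically.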
“…In this case a reasonable scheduling algorithm is to cycle through the different sizes, because the jobs of each size pack well together [418]. This works well for negatively correlated and even uncorrelated workloads, but is bad for positively correlated workloads [418,449]. The reason is that under a positive correlation the largest jobs dominate the machine for a long time, blocking out all others.…”
Section: Example 1: Scheduling Parallel Jobs By Size
confidence: 99%
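The packing argument in the statement above can be made concrete. A minimal sketch, assuming a hypothetical 16-processor machine and FCFS-style head-of-queue blocking, shows that a batch of equal-size jobs fills the machine exactly, while mixed sizes can strand processors behind a job that does not fit:

```python
# Sketch: why jobs of one size "pack well together" on a parallel machine.
# Machine size and job sizes are hypothetical.
MACHINE = 16  # processors (e.g., a 4-cube)

def pack_fcfs(job_sizes):
    """Start queued jobs in order until one does not fit; return utilization."""
    used = 0
    for size in job_sizes:
        if used + size > MACHINE:
            break               # FCFS blocks on the first job that cannot start
        used += size
    return used / MACHINE

same_size = pack_fcfs([4, 4, 4, 4])   # equal sizes fill the machine: 1.0
mixed = pack_fcfs([8, 4, 8, 2])       # second 8 blocks the queue: 0.75
```

Cycling through the queue one size at a time recreates the equal-size case batch by batch, which is why it works for uncorrelated workloads but backfires when the largest jobs also run longest and monopolize the machine.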
“…In this study, it is assumed that parallel jobs are selected for allocation and execution using the First-Come-First-Served (FCFS) and Shortest-Service-Demand (SSD) (i.e., shortest execution times) scheduling strategies. FCFS is chosen because it is fair and it is widely used in other similar studies [4,5,17,19], while SSD is adopted because it is expected to reduce performance loss due to FCFS blocking [10].…”
Section: Preliminaries
confidence: 99%
“…Doing so has the potential to eliminate interjob communication contention if each job's communication is routed entirely within the set of processors assigned to that job. Unfortunately, requiring that jobs be allocated to convex sets of processors reduces system utilization to levels unacceptable for any government-audited system [14,29].…”
Section: Allocation Algorithms
confidence: 99%