1987
DOI: 10.1109/tc.1987.5009495

Guided Self-Scheduling: A Practical Scheduling Scheme for Parallel Supercomputers


Cited by 518 publications (301 citation statements)
References 10 publications
“…tion boundaries; the applications examine and adjust to the number of available processors each time they begin an iteration, but do not do so while executing any one iteration. It is clearly possible to do much more dynamic scheduling, e.g., [20, 1, 6, 14]; we did not do so because of the very large incremental implementation cost relative to our more restrictive change, and because we expect that ST-EQUI would perform even better when jobs are more responsive to changes in their allocations. (Of the three policies, ST-EQUI reallocates processors most frequently, and is therefore most sensitive to the latency with which applications can respond to changing allocations.…”
Section: Methods (mentioning)
confidence: 99%
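The pattern this excerpt describes, re-reading the processor allocation only when an outer iteration begins, can be sketched in a few lines of OpenMP. This is purely illustrative and not the cited study's implementation; `query_allocated_processors` is a hypothetical stand-in for whatever mechanism the scheduler uses to publish the current allocation (here it simply reports the hardware processor count).

```cpp
#include <omp.h>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the scheduler's allocation interface; the excerpt
// does not specify one, so we just return the hardware processor count.
static int query_allocated_processors() {
    return omp_get_num_procs();
}

static void run(std::vector<double>& data, int outer_iterations) {
    for (int it = 0; it < outer_iterations; ++it) {
        // Examine and adjust the thread count only at the iteration boundary;
        // it stays fixed while the body of this outer iteration executes.
        omp_set_num_threads(query_allocated_processors());

        #pragma omp parallel for
        for (std::size_t i = 0; i < data.size(); ++i) {
            data[i] *= 2.0;  // placeholder per-element work
        }
    }
}
```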
“…IMSAME uses a modified Guided Self-Scheduling (GSS) [19] to handle workload assignment. In this line, the query metagenome (the reference metagenome is only processed to generate the hash table) is separated into M partitions, each of which will contain m_i subpartitions, with i ranging from 1 to M.…”
Section: Dynamic Workload Partitioning and Distribution to Threads (mentioning)
confidence: 99%
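For readers unfamiliar with GSS: the rule in the original scheme hands each requesting thread ceil(R/P) iterations, where R is the number of iterations still unassigned and P is the number of threads, so chunks start large and shrink toward one. The sketch below implements only that basic rule; it is not IMSAME's modified variant and ignores the M partitions and m_i subpartitions described above.

```cpp
#include <atomic>

// Basic GSS chunk rule: each work request receives ceil(remaining / P)
// iterations, producing geometrically decreasing chunk sizes.
struct GuidedScheduler {
    std::atomic<long> next{0};  // first iteration not yet handed out
    const long total;           // total iteration count
    const int  num_threads;     // P

    GuidedScheduler(long n, int p) : total(n), num_threads(p) {}

    // Fetches the next chunk as [begin, end); returns false when exhausted.
    bool next_chunk(long& begin, long& end) {
        long start = next.load();
        while (start < total) {
            long remaining = total - start;
            long chunk = (remaining + num_threads - 1) / num_threads;  // ceil
            if (next.compare_exchange_weak(start, start + chunk)) {
                begin = start;
                end = start + chunk;
                return true;
            }
            // compare_exchange_weak refreshed 'start' on failure; retry.
        }
        return false;
    }
};
```

Each worker thread would simply loop on `next_chunk` and process the returned index range until it reports that the iteration space is exhausted.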
“…To account for good workload balancing, the different loop indexes are assigned to the concurrent threads using the guided self-scheduling algorithm [41] available in OpenMP. For the class Complex (see Figure 1), we use the C++ standard complex class.…”
Section: Begin_algorithm // Tridiagonalization of the N x N A Matrix (mentioning)
confidence: 99%
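OpenMP exposes this policy through the schedule(guided) clause. The loop below is a generic sketch on a triangular workload whose per-iteration cost shrinks as the index grows, the kind of imbalance guided scheduling handles well; it is not the authors' tridiagonalization code.

```cpp
#include <cstddef>
#include <vector>

// Generic example of OpenMP's guided schedule: iterations get cheaper as j
// grows, so static chunking would leave threads unevenly loaded.
void scale_upper_triangle(std::vector<std::vector<double>>& a) {
    const std::size_t n = a.size();
    #pragma omp parallel for schedule(guided)
    for (std::size_t j = 0; j < n; ++j) {
        for (std::size_t k = j; k < n; ++k) {
            a[j][k] *= 0.5;  // placeholder work on row j's upper part
        }
    }
}
```

Compiled with OpenMP enabled (e.g., -fopenmp), each thread initially grabs a large block of j values and then progressively smaller ones as the iteration space drains, following the same decreasing-chunk idea as GSS.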