Proceedings of the 18th Annual International Conference on Supercomputing 2004
DOI: 10.1145/1006209.1006256
Cluster scheduling for explicitly-speculative tasks

Cited by 5 publications (3 citation statements)
References 84 publications (130 reference statements)
“…This is a well-trodden field: the Alpha OS [8] handled time-varying utility functions; [6] discussed how to do processor scheduling for them; [18] looked at tradeoffs between multiple applications with multiple utility-dimensions; Muse [5] and Unity [24] used utility functions for continuous fine grained service quality control; and [20] described using utility functions to allow speculative execution of tasks. Unlike prior systems, our clients use aggregate utility functions to control service provider behavior across multiple jobs.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…The idea of scheduling based on per-job utility functions is a well-trodden field: the Alpha OS [19] handles time-varying utility functions; Chen and Muhlethaler [14] discuss how to do processor scheduling for them; Lee et al [31] look at trade-offs between multiple applications with multiple utility-dimensions; Muse [13] and Unity [48] use utility functions for continuous fine grained service quality control; and Petrou et al [40] describe using utility functions to allow speculative execution of tasks. Siena [10] considers the impact of user aggregate utility functions to control service provider behavior across multiple jobs.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
“…At the hardware level, speculative branch execution is used to prevent bubbles from getting into an instruction pipeline. At the system level, optimism can be used to do speculative cluster scheduling to service applications such as speculative DNA sequencing [13]. At the computational domain level, optimistic simulation has been long known to produce performance benefits [10,9], under the right conditions, and is still finding applications today [18].…”
Section: Introduction (citation type: mentioning)
confidence: 99%