Proceedings of the 7th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments 2011
DOI: 10.1145/1952682.1952705
Minimal-overhead virtualization of a large scale supercomputer

Abstract: Virtualization has the potential to dramatically increase the usability and reliability of high performance computing (HPC) systems. However, this potential will remain unrealized unless overheads can be minimized. This is particularly challenging on large scale machines that run carefully crafted HPC OSes supporting tightly-coupled, parallel applications. In this paper, we show how careful use of hardware and VMM features enables the virtualization of a large-scale HPC system, specifically a Cray XT4 machine, …

Cited by 49 publications (26 citation statements)
References 17 publications
“…Consequently, to obtain good performance in the supporting infrastructure for processing big quantities of data, such as low latency and high throughput, some management enhancements are needed to operate the available computing resources in each data center more intelligently, such as virtual machines (Dai et al, 2013), memory (Zhou & Li, 2013), CPU scheduling (Bae et al, 2012), cache (Koller et al, 2011), and I/O (Ram et al, 2013). Another very important functional aspect to be aware of in data centers is enhancing network performance (Marx, 2013; Lange et al, 2011; Saleem, Hassan, & Asirvadam, 2011). To enhance this performance in a more flexible way, the network resources should become virtualized.…”
Section: Data Centers
confidence: 99%
“…This opportunity is leveraged in an automatic adaptive system that chooses between these two mappings. We implemented and evaluated this in the context of the Palacios VMM [1,2,3]. We demonstrate that the performance of SPEC and PARSEC benchmarks can be increased by as much as 66%, energy reduced by as much as 31%, and power reduced by as much as 17%, depending on the optimization objective.…”
Section: Dynamic Adaptive Vcore Mapping for Various Objectives
confidence: 99%
“…Like L4, Kitten also provides a native environment that can be used to develop customizations. The most notable differences are: that Palacios and Kitten are being developed for x86-based HPC systems rather than for a highly specialized supercomputer platform such as BG/P; that Palacios runs as a kernel module in Kitten's privileged domain, whereas our VMM runs completely decomposed and deprivileged as a user-level process; and that their approach to device virtualization introduces a scheme relying on guest cooperation in order to achieve high-performance virtualization of HPC network devices [20], whereas our approach currently fully intercepts guest device accesses (although it could be adapted to support a similar scheme).…”
Section: Related Work
confidence: 99%