Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing 2010
DOI: 10.1145/1851476.1851534

Providing a cloud network infrastructure on a supercomputer

Abstract: Supercomputers and clouds both strive to make a large number of computing cores available for computation. More recently, shared objectives such as low power consumption, manageability at scale, and low cost of ownership have been driving a convergence of their hardware and software. Challenges remain, however; one is that current cloud infrastructure does not yield the performance sought by many scientific applications. A source of the performance loss is virtualization, and virtualization of the network in particular…

Cited by 28 publications (16 citation statements) · References 29 publications
“…Wang and colleagues have extended Hadoop to take advantage of InfiniBand [3]. Similar in spirit, work by Jose et al. [4] and by the authors themselves [5] has investigated how a distributed memory cache can exploit HPC interconnects such as InfiniBand or those of the Blue Gene/P supercomputer for better efficiency. LibRIPC strives to generalize such efforts both with respect to the cloud workloads running atop and the hardware architectures running underneath.…”
Section: Related Work
confidence: 99%
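The interconnect-offload idea this statement refers to typically builds on one-sided RDMA, where a node writes directly into a peer's memory without involving the remote CPU. The following is a minimal sketch using the libibverbs API; it assumes a connected RC queue pair (qp), a locally registered memory region (mr), and a remote_addr/rkey pair already exchanged out of band. None of this appears in the cited works themselves; it only illustrates the primitive such caches exploit.

#include <stdint.h>
#include <stddef.h>
#include <infiniband/verbs.h>

/* Hypothetical helper: one-sided RDMA write of a local buffer into a
 * remote node's memory.  Setup (device, PD, CQ, QP, MR, rkey exchange)
 * is assumed to have happened already. */
int rdma_put(struct ibv_qp *qp, struct ibv_mr *mr,
             void *local_buf, size_t len,
             uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .opcode     = IBV_WR_RDMA_WRITE,   /* no remote CPU involvement */
        .sg_list    = &sge,
        .num_sge    = 1,
        .send_flags = IBV_SEND_SIGNALED,   /* request a completion entry */
    };
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);  /* 0 on success */
}

Bypassing the remote CPU in this way is what lets a distributed cache serve reads and writes at near-wire latency, which is the efficiency gain the cited works pursue.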
“…When 100% CPU load was applied, they observed 26.5% CPU usage for Dom-0, which is similar to the result we obtained in our experiments. Other works also try to minimize the I/O overhead imposed by virtualization: for instance, Appavoo et al. [18] presented a mechanism to improve the performance of network access in virtualized machines, and Wei et al. [19] investigated the use of dedicated Xen domains to handle I/O operations. Liu and Abali [20] proposed the Virtualization Polling Engine (VPE), which uses dedicated CPU cores to help virtualize I/O devices through an event-driven execution model with dedicated polling threads.…”
Section: Related Work
confidence: 99%
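The dedicated-polling-core approach attributed to VPE can be sketched as follows. This is an illustrative stand-in, not code from [20]: a thread pinned to its own core spins on a shared request ring instead of taking device interrupts; io_req_t, handle_io, and the ring layout are all hypothetical.

#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 256

/* Hypothetical I/O request descriptor and handler. */
typedef struct { void *buf; size_t len; } io_req_t;
static void handle_io(io_req_t *req) { (void)req; /* perform the device op */ }

static io_req_t ring[RING_SIZE];   /* single-producer, single-consumer ring */
static atomic_uint head;           /* producer index (guest-facing side) */
static atomic_uint tail;           /* consumer index (polling core) */
static atomic_bool running = true;

/* Dedicated polling thread: pinned to its own core, it spins on the
 * request ring instead of waiting for interrupts, trading one core for
 * lower per-request latency. */
static void *poll_loop(void *arg)
{
    (void)arg;
    while (atomic_load(&running)) {
        unsigned t = atomic_load(&tail);
        if (t == atomic_load(&head)) {   /* ring empty */
            sched_yield();               /* keep spinning */
            continue;
        }
        handle_io(&ring[t % RING_SIZE]);
        atomic_store(&tail, t + 1);      /* consume the slot */
    }
    return NULL;
}

The design choice is the same trade-off the citation describes: burning a core on polling removes interrupt and context-switch overhead from the I/O path, which is exactly the overhead the Dom-0 measurements above expose.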
“…We have verified our implementation with Kittyhawk Linux, a BG/P version of the Linux kernel [4] with support for BG/P's hardware devices. It also provides an overlay that maps standard Linux Ethernet communication onto Blue Gene's high-speed collective and torus interconnects [3]. Our VMM allows one or more instances of Kittyhawk Linux to run in a VM.…”
Section: Initial Evaluation
confidence: 99%
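The overlay mechanism mentioned here, mapping Linux Ethernet traffic onto the torus and collective networks, can be sketched with the standard Linux tap interface: the host reads raw Ethernet frames from a tap device and tunnels them over the interconnect. In this sketch, interconnect_send is a hypothetical placeholder for the torus/collective send primitive; the actual Kittyhawk implementation lives inside the kernel's network drivers.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Hypothetical stand-in for the torus/collective send primitive. */
extern void interconnect_send(const void *frame, ssize_t len);

/* Bridge a Linux tap device onto the interconnect: every Ethernet
 * frame the kernel hands us is tunneled over the high-speed network. */
int run_overlay(const char *ifname)
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;          /* raw Ethernet, no header */
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }

    char frame[2048];
    for (;;) {
        ssize_t n = read(fd, frame, sizeof(frame));  /* one Ethernet frame */
        if (n <= 0)
            break;
        interconnect_send(frame, n);                 /* tunnel it */
    }
    close(fd);
    return 0;
}

Because the guest sees an ordinary Ethernet device, unmodified TCP/IP stacks work as-is; the cost is the per-frame copy and encapsulation, which feeds directly into the overhead discussed in the next statement.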
“…Figure 7 shows the results. Our virtualization layer imposes significant overhead on Ethernet network performance (which already falls short of the performance that the torus and collective hardware can deliver [3]). We are confident, however, that optimizations such as para-virtual device support can make virtualization substantially more efficient.…”
Section: Initial Evaluation
confidence: 99%