Proceedings of the 16th International Symposium on High Performance Distributed Computing 2007
DOI: 10.1145/1272366.1272390

High performance and scalable I/O virtualization via self-virtualized devices

Cited by 161 publications (92 citation statements)
References 16 publications
“…Several research groups are exploring the performance, potential, and drawbacks of current virtualization technologies for HPC systems [8]. These studies seem to converge to two conclusions.…”
Section: Introduction (mentioning)
confidence: 87%
“…The second conclusion of ongoing research on virtualization for HPC environments is that, though extensively studied, current paravirtualization frameworks still impose bottlenecks for HPC applications. In particular, frequent context switches between virtual machines and the hypervisor, and extensive data copying, hurt the performance of applications with high demands for I/O or communication [8].…”
Section: Introduction (mentioning)
confidence: 99%
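The bottleneck this excerpt points to can be made concrete with a back-of-envelope cost model. The sketch below is not taken from the cited work; all constants are illustrative assumptions, chosen only to show how per-packet world switches and an extra data copy come to dominate small transfers in a split-driver (paravirtualized) I/O path.

/* Rough, illustrative cost model for one network send in a
 * paravirtualized split-driver path vs. direct device access.
 * Every constant here is an assumption for illustration, not a measurement. */
#include <stdio.h>

int main(void) {
    const double switch_ns = 3000.0;     /* assumed VM <-> hypervisor world switch */
    const double copy_ns_per_kb = 250.0; /* assumed cost of one extra data copy */
    const double base_ns = 5000.0;       /* assumed driver + wire cost, same in both paths */
    const double msg_kb = 1.5;           /* one MTU-sized packet */

    /* Split-driver path: the guest frontend kicks the backend (one switch),
     * the backend copies the packet out of the shared ring (one extra copy),
     * and completion is signalled back to the guest (one more switch). */
    double paravirt = base_ns + 2.0 * switch_ns + msg_kb * copy_ns_per_kb;

    /* Direct access (e.g. a self-virtualized or assigned device): the guest
     * drives a hardware queue itself, so no extra switches or copies. */
    double direct = base_ns;

    printf("paravirtualized: %.0f ns per packet\n", paravirt);
    printf("direct access:   %.0f ns per packet\n", direct);
    printf("overhead:        %.1fx\n", paravirt / direct);
    return 0;
}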
“…Nevertheless, network access in a virtualized environment is still slower than in the native environment, since exits are still generated during notification and completion. Many previous studies [5], [7]–[10] show the context switch to be a major source of device virtualization overhead. In addition, since the virtual machine only accesses para-virtualized device drivers to deliver the network request, I/O stacks such as the TCP/IP layer in the guest OS are obviously unnecessary and only increase the virtualization overhead; network packets are handled twice, on both the guest and the host, because the host must rebuild or modify the packets with the configuration of the physical network, whereas the guest OS builds packets with the configuration of the virtualized network.…”
Section: Network Virtualization Backgrounds (mentioning)
confidence: 99%
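A minimal sketch of the double handling described above: the guest fills in headers using its virtual network configuration, and the host backend touches the same packet again to rewrite them for the physical network. The structure, field names, and addresses below are illustrative assumptions, not code from the cited systems.

/* Conceptual illustration of packets being handled twice: once in the
 * guest with its virtual NIC configuration, once in the host backend
 * with the physical network configuration.  All values are made up. */
#include <stdio.h>
#include <string.h>

struct frame {
    char src_mac[18];
    char dst_mac[18];
    char payload[64];
};

/* Guest side: headers reflect the paravirtualized NIC's configuration. */
static void guest_build(struct frame *f, const char *data) {
    strcpy(f->src_mac, "52:54:00:aa:bb:01");   /* virtual NIC MAC (assumed) */
    strcpy(f->dst_mac, "52:54:00:aa:bb:02");   /* virtual peer (assumed) */
    strncpy(f->payload, data, sizeof f->payload - 1);
}

/* Host side: the backend rewrites the same packet for the physical topology
 * before it reaches the real NIC. */
static void host_rewrite(struct frame *f) {
    strcpy(f->src_mac, "3c:ec:ef:00:11:22");   /* physical NIC MAC (assumed) */
    strcpy(f->dst_mac, "3c:ec:ef:00:11:99");   /* physical next hop (assumed) */
}

int main(void) {
    struct frame f = {0};
    guest_build(&f, "hello");      /* first pass, in the guest OS */
    host_rewrite(&f);              /* second pass, in the host backend */
    printf("on the wire: %s -> %s (%s)\n", f.src_mac, f.dst_mac, f.payload);
    return 0;
}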
“…On the contrary, the passthrough approach assigns one physical device exclusively to one VM that has full control and direct access to most parts of the assigned hardware. This has the advantage of significantly reducing the main bottleneck of virtual environments: the overhead of I/O operations [17,18,19,20,21]. Unfortunately, direct device assignment also raises several security concerns.…”
Section: Direct Device Assignment (mentioning)
confidence: 99%
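On current Linux hosts, the exclusive assignment described in this excerpt is typically done through VFIO, a mechanism that postdates the original paper. The minimal sketch below, with a placeholder IOMMU group number and PCI address and most error handling omitted, shows how a VMM obtains a device file descriptor giving direct access to the device's regions and interrupts; it illustrates the general mechanism, not the implementation used in the cited work.

/* Minimal VFIO passthrough sketch.  The group number (26) and the PCI
 * address (0000:01:00.0) are placeholders; the device must already be
 * bound to the vfio-pci driver on the host. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void) {
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);
    if (container < 0 || group < 0) {
        perror("open vfio");
        return 1;
    }

    /* Sanity-check the API and verify the IOMMU group is "viable",
     * i.e. every device in it is bound to VFIO. */
    if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
        return 1;
    struct vfio_group_status status = { .argsz = sizeof(status) };
    ioctl(group, VFIO_GROUP_GET_STATUS, &status);
    if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
        return 1;

    /* Attach the group to the container, select an IOMMU model, and get
     * a file descriptor for the device itself.  This fd exposes the
     * device's BARs, config space, and interrupts; it is what a VMM
     * hands to the guest for direct device assignment. */
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);
    int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");

    struct vfio_device_info info = { .argsz = sizeof(info) };
    ioctl(device, VFIO_DEVICE_GET_INFO, &info);
    printf("device fd %d: %u regions, %u irqs\n",
           device, info.num_regions, info.num_irqs);
    return 0;
}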