The growing popularity of virtual machines is pushing the demand for high performance communication between them. Past solutions have seen the use of hardware assistance, in the form of "PCI passthrough" (dedicating parts of physical NICs to each virtual machine) and even bouncing traffic through physical switches to handle data forwarding and replication. In this paper we show that, with a proper design, very high speed communication between virtual machines can be achieved completely in software. Our architecture, called VALE, implements a Virtual Local Ethernet that can be used by virtual machines, such as QEMU, KVM and others, as well as by regular processes. VALE achieves a throughput of over 17 million packets per second (Mpps) between host processes, and over 2 Mpps between QEMU instances, without any hardware assistance. VALE is available for both FreeBSD and Linux hosts, and is implemented as a kernel module that extends our recently proposed netmap framework, using similar techniques to achieve high packet rates.
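As an illustration of how a regular host process attaches to a VALE switch, the following is a minimal sketch (not taken from the paper) of a sender using the netmap helper API; "vale0:p1" is a placeholder port name (opening it creates port p1 on switch vale0), and the frame contents are dummies.

```c
/* Sketch: host process transmitting frames on a VALE port.
 * Assumes the netmap kernel module is loaded; "vale0:p1" is a placeholder. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <sys/ioctl.h>
#include <poll.h>

int main(void)
{
    struct nm_desc *d = nm_open("vale0:p1", NULL, 0, NULL);
    if (d == NULL)
        return 1;

    char frame[60] = {0};   /* dummy minimum-size Ethernet frame */
    struct pollfd pfd = { .fd = d->fd, .events = POLLOUT };

    for (int batch = 0; batch < 1000; batch++) {
        poll(&pfd, 1, -1);                    /* wait for room in the TX ring */
        while (nm_inject(d, frame, sizeof(frame)) > 0)
            ;                                 /* queue frames until the ring is full */
        ioctl(d->fd, NIOCTXSYNC, NULL);       /* hand the batch to the VALE switch */
    }
    nm_close(d);
    return 0;
}
```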
Most of the work on VM network performance has focused so far on bulk TCP traffic, which covers classical applications of virtualization. Completely new "paravirtualized devices" (Xenfront, VIRTIO, vmxnet) have been designed and implemented to improve network throughput. We expect virtualization to become widely used also for different workloads: packet switching devices and middleboxes, Software Defined Networks, etc. These applications involve very high packet rates that are problematic not only for the hypervisor (which emulates network interfaces) but also for the host itself (which switches packets between guests and physical NICs). In this paper we provide three main results. First, we demonstrate how rates of millions of packets per second can be achieved even within VMs, with limited but targeted modifications to device drivers, hypervisors and the host's virtual switch. Second, we show that emulation of conventional NICs (e.g., Intel e1000) is perfectly capable of achieving such packet rates, without requiring completely different device models. Finally, we provide sets of modifications suitable for different use cases (acting only on the guest, or only on the host, or on both) which can improve the network throughput of a VM by 20 times or more. These results are important because they enable a new set of applications within virtual machines. In particular, we achieve guest-to-guest UDP speeds of over 1 Mpps with short frames (and 6 Gbit/s with 1500-byte frames) using a conventional e1000 device and socket-based senders/receivers. This matches the speed of the OS on bare metal. Furthermore, we reach over 5 Mpps when guests use the netmap API. Our work requires only small changes to device drivers (about 100 lines, for both the FreeBSD and Linux versions of e1000), similarly small modifications to the hypervisor (we have a QEMU prototype available) and the use of the VALE switch as a network backend. Relevant changes are being incorporated and/or distributed as external patches for FreeBSD, QEMU and Linux.
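In a setup of this kind the hypervisor is pointed at the VALE switch as its network backend (in QEMU builds with netmap support this is done with the netmap netdev, e.g. something like -netdev netmap,ifname=vale0:1, though exact option names depend on the build), while the guest that wants the highest rates opens its emulated e1000 NIC through the netmap API. The following is a sketch, under those assumptions, of a guest-side receiver; "netmap:eth0" is a placeholder interface name and counting frames stands in for real processing.

```c
/* Sketch: guest-side receiver reading the emulated e1000 NIC via netmap. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
    if (d == NULL)
        return 1;

    struct pollfd pfd = { .fd = d->fd, .events = POLLIN };
    struct nm_pkthdr h;
    unsigned long count = 0;

    while (count < 1000000) {
        poll(&pfd, 1, -1);                 /* block until the RX ring has packets */
        while (nm_nextpkt(d, &h) != NULL)
            count++;                       /* h.len and h.buf describe each frame */
    }
    printf("received %lu frames\n", count);
    nm_close(d);
    return 0;
}
```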
Supporting network I/O at high packet rates in virtual machines is fundamental for the deployment of Cloud data centers and Network Function Virtualization. Historically, SR-IOV and hardware passthrough were thought to be the only viable solutions for reducing the high cost of virtualization. In previous work [15] we showed how even plain device emulation can achieve VM-to-VM speeds of millions of packets per second (Mpps), though still at least 3 times slower than bare metal. In this paper, to fill this gap, we present ptnetmap, a virtual passthrough network device based on the netmap framework. ptnetmap allows VMs to connect to any netmap port (physical devices, software switches, netmap pipes), preserving the speed and isolation of the native netmap system while removing the constraints of hardware passthrough. Our work includes two key features not present in previous proposals: we provide a high speed path also to untrusted VMs, and we do not require dedicated polling cores/threads, which is fundamental for an efficient use of resources. Beyond these features, our speed also exceeds previously published values. Running on top of ptnetmap, VMs can saturate a 10 Gbit link at 14.88 Mpps, talk at over 20 Mpps to untrusted VMs, and at over 70 Mpps to trusted VMs. ptnetmap extends the netmap framework, and currently supports Linux and FreeBSD guests and QEMU/KVM hosts. Support for a bhyve/FreeBSD host is under development.
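A key point of the passthrough design is that the guest keeps using the ordinary netmap data path: rings and packet buffers are shared with the host, and the application fills TX slots directly, with poll() or NIOCTXSYNC used only to notify the other side. The sketch below (assumptions: a ptnetmap-backed port visible in the guest as "netmap:eth0", dummy 60-byte payloads) shows what such a guest transmit loop might look like.

```c
/* Sketch: guest transmit loop over a ptnetmap-backed port, using the plain
 * netmap ring API. "netmap:eth0" and the payload are placeholders. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <string.h>

int main(void)
{
    struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
    if (d == NULL)
        return 1;

    struct netmap_ring *ring = NETMAP_TXRING(d->nifp, 0);  /* first TX ring */
    struct pollfd pfd = { .fd = d->fd, .events = POLLOUT };
    char payload[60] = {0};

    for (int batch = 0; batch < 1000; batch++) {
        poll(&pfd, 1, -1);                    /* wait for space, sync with the host */
        while (!nm_ring_empty(ring)) {        /* free slots available */
            struct netmap_slot *slot = &ring->slot[ring->cur];
            memcpy(NETMAP_BUF(ring, slot->buf_idx), payload, sizeof(payload));
            slot->len = sizeof(payload);
            ring->head = ring->cur = nm_ring_next(ring, ring->cur);
        }
    }
    nm_close(d);
    return 0;
}
```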
Network Function Virtualization (NFV) aims at bringing the benefits of virtualization to network middleboxes (routers, firewalls, Intrusion Detection Systems, ...). In the last few years the NFV use-case, initially hampered by the poor performance of traditional virtualized I/O and network stacks, has prompted the design of several frameworks, all trying to provide fast networking for VMs and/or containers. These solutions share many common ideas, but they also differ in performance, flexibility, portability, the amount of specialized hardware required and/or software to be rewritten, attention to energy consumption issues, and so on. In this survey we focus on the NFV data-path, as opposed to the orthogonal control-path. We define a set of desirable features for NFV data-paths and use them to compare a selection of the most promising and/or widely used NFV frameworks. No single solution is optimal for all the features, so our survey may prompt further research in this area.
The rising interest in Network Function Virtualization (NFV) requires Virtual Machines (VMs) to operate with diversified networking workloads, from traditional bulk TCP transfers to novel ones featuring extremely high packet rates. In response, researchers have explored and proposed new solutions for high performance VM networking, including optimizations to virtual network adapters (such as VirtIO) to support high speed bulk traffic, and alternative frameworks for userspace networking and physical or virtual passthrough. To date, we are still missing a comprehensive solution that supports such extreme workloads across multiple operating systems and hypervisors, while at the same time addressing other requirements such as ease of configuration, operating system independence, scalability and isolation. In this paper we present ptnet, an approach to network I/O virtualization that provides high performance for both traditional TCP/IP and high packet rate applications. ptnet leverages the features of the netmap framework (including virtualization and passthrough support), and defines a simple yet performant network device model that can be easily supported in different operating systems and hypervisors. We prove the effectiveness of our approach by comparing ptnet's performance with one of the state-of-the-art I/O virtualization solutions, namely VirtIO on Linux and QEMU/KVM. ptnet is available under a BSD license as part of the netmap distribution on GitHub.
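To give a concrete sense of the high packet rate, NFV-style workload targeted here (a sketch, not code from the paper), a guest middlebox can forward frames between two netmap-capable interfaces with a simple copy loop; "netmap:eth0" and "netmap:eth1" are placeholder interface names, and frames are dropped if the output ring is full.

```c
/* Sketch: guest middlebox forwarding frames between two netmap ports. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <sys/ioctl.h>
#include <poll.h>

int main(void)
{
    struct nm_desc *in  = nm_open("netmap:eth0", NULL, 0, NULL);
    struct nm_desc *out = nm_open("netmap:eth1", NULL, 0, NULL);
    if (in == NULL || out == NULL)
        return 1;

    struct pollfd pfd = { .fd = in->fd, .events = POLLIN };
    struct nm_pkthdr h;
    u_char *buf;

    for (;;) {
        poll(&pfd, 1, -1);                 /* wait for traffic on the input port */
        while ((buf = nm_nextpkt(in, &h)) != NULL)
            nm_inject(out, buf, h.len);    /* copy the frame to the output port */
        ioctl(out->fd, NIOCTXSYNC, NULL);  /* push the batch out */
    }
}
```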