Current cloud computing research is focused on providing a scalable,
on-demand, clustered computing environment. One of the major
challenges in this field is bridging the gap between virtualization
techniques and high-performance network I/O. This study addresses this apparent conflict.
Virtualizing physical components requires hardware-assisted software
hypervisors to control I/O device access. These hypervisors provide
abstract methods for Virtual Machines to utilize the available
resources. In HPC, by contrast, data access relies on User-level Networking and OS-bypass techniques, which achieve high-bandwidth, low-latency communication between nodes. Consequently, line-rate bandwidth on 10GbE interconnects can only be realized by alleviating the software overheads imposed by the virtualization abstraction layer, namely the driver domain model and the hypervisor itself.
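
To make the native baseline concrete, the following C sketch illustrates the User-level Networking / OS-bypass idea mentioned above: the application writes a send descriptor into a queue mapped into its own address space and rings a doorbell, so no system call or kernel driver sits on the fast path. The names (tx_desc, nic_queue, userlevel_send) are hypothetical, and plain host memory stands in for NIC registers that a real driver would mmap(), so the sketch is self-contained.

    /*
     * Illustrative sketch only: tx_desc, nic_queue and userlevel_send are
     * hypothetical names, not part of any real NIC driver.  In a real
     * OS-bypass stack the queue and doorbell would be NIC memory mmap()ed
     * into the process; plain host memory stands in here so the example
     * is self-contained.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define QUEUE_DEPTH 64

    struct tx_desc {                /* one send descriptor in the mapped queue */
        uint64_t buf_addr;          /* DMA address of the payload              */
        uint32_t len;               /* payload length in bytes                 */
        uint32_t flags;             /* e.g. a "descriptor valid" bit           */
    };

    struct nic_queue {
        struct tx_desc ring[QUEUE_DEPTH];
        volatile uint32_t doorbell; /* written by software, read by the NIC    */
        uint32_t head;              /* next free slot                          */
    };

    /* Post a send without any system call: fill a descriptor in the mapped
     * ring and ring the doorbell.  The kernel never runs on this path. */
    static void userlevel_send(struct nic_queue *q, const void *buf, uint32_t len)
    {
        uint32_t slot = q->head % QUEUE_DEPTH;
        q->ring[slot].buf_addr = (uint64_t)(uintptr_t)buf;
        q->ring[slot].len      = len;
        q->ring[slot].flags    = 1;         /* mark the descriptor valid       */
        q->head++;
        q->doorbell = q->head;              /* the NIC picks this up and DMAs  */
    }

    int main(void)
    {
        static struct nic_queue q;          /* stand-in for mapped NIC memory  */
        static char msg[] = "hello";
        userlevel_send(&q, msg, sizeof msg);
        printf("posted %u descriptor(s), doorbell=%u\n",
               (unsigned)q.head, (unsigned)q.doorbell);
        return 0;
    }
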
Previous work has also concluded that integrating virtualization semantics into specialized software running on Network Processors can isolate and ultimately minimize the hypervisor overhead associated with device access.
We design a framework in which Virtual Machines efficiently share network I/O devices, bypassing the overheads imposed by the hypervisor. Specifically, our framework allows VMs to exchange messages directly with the network via a high-performance NIC, leaving security and isolation issues to the hypervisor. This mechanism, however, requires hardware support for packet matching on the NIC, so that packets can be delivered to unprivileged guests without the intervention of the VMM or the driver domain.
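
The sketch below illustrates the kind of packet matching such hardware support enables: the NIC keeps a table keyed by each guest's address, looks up the owner of every incoming frame, and places the payload directly into a buffer that guest has posted, falling back to the driver domain only when no match is found. The structures and names (guest_ctx, rx_ring, deliver_frame) are illustrative assumptions, not the actual MX firmware interface.

    /*
     * Hypothetical sketch of NIC-side packet matching.  guest_ctx, rx_ring
     * and deliver_frame are illustrative assumptions, not the actual MX
     * firmware interface; memcpy() stands in for a DMA write.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_GUESTS 8
    #define RING_SLOTS 16

    struct rx_ring {                   /* per-guest ring of pre-posted buffers */
        void    *bufs[RING_SLOTS];
        uint32_t tail;
    };

    struct guest_ctx {
        uint8_t        mac[6];         /* key used to match incoming frames    */
        struct rx_ring ring;           /* buffers granted by that guest        */
    };

    static struct guest_ctx guests[MAX_GUESTS];
    static unsigned nguests;

    /* Run for every incoming frame: find the owning guest and copy the
     * payload straight into one of its buffers, so neither the VMM nor the
     * driver domain touches the packet. */
    static int deliver_frame(const uint8_t *frame, uint32_t len)
    {
        for (unsigned i = 0; i < nguests; i++) {
            if (memcmp(frame, guests[i].mac, 6) == 0) {   /* destination MAC   */
                struct rx_ring *r = &guests[i].ring;
                void *buf = r->bufs[r->tail % RING_SLOTS];
                if (!buf)
                    return -1;             /* guest has no buffer posted       */
                memcpy(buf, frame, len);   /* direct placement into guest RAM  */
                r->tail++;
                return (int)i;             /* raise a receive event for guest i */
            }
        }
        return -1;   /* no match: fall back to the driver domain path          */
    }

    int main(void)
    {
        static uint8_t buf0[2048];
        uint8_t mac[6] = { 0x00, 0x16, 0x3e, 0x00, 0x00, 0x01 };

        /* Register one guest keyed by its MAC address. */
        memcpy(guests[0].mac, mac, sizeof mac);
        guests[0].ring.bufs[0] = buf0;
        nguests = 1;

        /* A minimal "frame": destination MAC followed by a payload. */
        uint8_t frame[64] = { 0x00, 0x16, 0x3e, 0x00, 0x00, 0x01, 'h', 'i' };
        printf("frame delivered to guest %d\n", deliver_frame(frame, sizeof frame));
        return 0;
    }
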
Myrinet NICs offload message-passing protocol processing onto the interface itself and already provide a degree of virtualization semantics. This feature allows us to integrate these semantics into the Xen netfront / netback drivers and present a mechanism to transfer messages efficiently between VMs and the network. With MyriXen, multiple Virtual Machines residing in one or more Xen VM containers use the MX message-passing protocol as if the NIC were assigned solely to each of them.
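
From the guest's point of view, the programming model therefore stays the familiar endpoint / send / wait sequence. The sketch below mimics that shape with placeholder mxlike_* functions whose bodies simply simulate completion; it is not the real MX API, only an illustration of how an unmodified application-level flow looks when the VM treats the shared NIC as its own.

    /*
     * Guest-side view under MyriXen: the mxlike_* names are placeholders
     * that only mirror the shape of an MX endpoint / send / wait sequence;
     * they are not the real MX library and simply simulate completion.
     */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint32_t id; } mxlike_endpoint_t;   /* per-VM endpoint */
    typedef struct { int done; }    mxlike_request_t;    /* pending send    */

    /* In MyriXen these calls would be served by the netfront driver and the
     * NIC firmware; here they just pretend the hardware completed at once. */
    static int mxlike_open_endpoint(uint32_t id, mxlike_endpoint_t *ep)
    {
        ep->id = id;
        return 0;
    }

    static int mxlike_isend(mxlike_endpoint_t *ep, const void *buf,
                            uint32_t len, uint64_t match, mxlike_request_t *req)
    {
        (void)buf;
        printf("endpoint %u: queued %u bytes, match 0x%llx\n",
               (unsigned)ep->id, (unsigned)len, (unsigned long long)match);
        req->done = 1;                 /* pretend the NIC completed the send   */
        return 0;
    }

    static int mxlike_wait(const mxlike_request_t *req)
    {
        return req->done ? 0 : -1;
    }

    int main(void)
    {
        mxlike_endpoint_t ep;
        mxlike_request_t  req;
        static char msg[] = "guest payload";

        /* The same sequence a native application would follow; the guest
         * neither knows nor cares that other VMs share the interface. */
        mxlike_open_endpoint(3, &ep);
        mxlike_isend(&ep, msg, sizeof msg, 0xabcdULL, &req);
        return mxlike_wait(&req);
    }
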
The driver domain model maintains physical-to-virtual address mappings for the unprivileged guests, and vice versa. The privileged guest runs a netback
driver, which can communicate with the VMs via multiple event
channels. A netfront driver residing in each VM is responsible for
initiating each transfer, while protocol processing runs on the NIC
itself as in the standard model. All data transfers between the VM and
the NIC are performed via a direct data path using the mappings
provided by the driver domain model. In this way, the hypervisor and
the privileged guest are only aware of the initiation and the
completion of send or receive events. Thus, the hypervisor, the driver domain model, and the netback / netfront drivers are removed from the critical path that determines throughput and latency.
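
The following simplified C model summarizes this split between the control path and the data path: the netfront posts a small request on a shared ring and sends a notification, while the payload itself is fetched directly from the pre-mapped guest buffer. The names (tx_req, shared_ring, evtchn_notify, nic_dma_fetch) are hypothetical stand-ins for the real Xen shared rings, grant mappings, and event channels.

    /*
     * Simplified model of the send path described above.  tx_req,
     * shared_ring, evtchn_notify and nic_dma_fetch are hypothetical
     * stand-ins for the real Xen shared rings, grant mappings and event
     * channels; printf() marks where hardware or hypercalls would act.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 32

    struct tx_req {                 /* request the netfront places on the ring */
        uint64_t guest_addr;        /* pre-mapped guest buffer (set up through  */
                                    /* the driver domain's address mappings)    */
        uint32_t len;
    };

    struct shared_ring {            /* shared between netfront and netback      */
        struct tx_req req[RING_SIZE];
        uint32_t prod;              /* producer index, written by the guest     */
    };

    /* Control path: only a small notification crosses the event channel;
     * no payload is copied here. */
    static void evtchn_notify(int port)
    {
        printf("event channel %d: transfer initiated\n", port);
    }

    /* Data path: the NIC fetches the payload directly from the guest page
     * using the mapping set up in advance, bypassing netback and the VMM. */
    static void nic_dma_fetch(const struct tx_req *r)
    {
        printf("NIC DMA: %u bytes from guest address 0x%llx\n",
               (unsigned)r->len, (unsigned long long)r->guest_addr);
    }

    /* Netfront side: post a request and notify; the completion event later
     * arrives over the same channel. */
    static void netfront_send(struct shared_ring *ring, int evtchn_port,
                              const void *buf, uint32_t len)
    {
        struct tx_req *r = &ring->req[ring->prod % RING_SIZE];
        r->guest_addr = (uint64_t)(uintptr_t)buf;   /* stands in for a grant    */
        r->len = len;
        ring->prod++;
        evtchn_notify(evtchn_port);   /* control path: initiation only          */
        nic_dma_fetch(r);             /* data path: direct guest <-> NIC        */
    }

    int main(void)
    {
        static struct shared_ring ring;
        static char payload[] = "message body";
        netfront_send(&ring, 5, payload, sizeof payload);
        return 0;
    }
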
Publications