Explaining Packet Delays under Virtualization
Summary
This paper studies the impact of virtualization on round-trip time (RTT) measurements through controlled experiments on Linux-VServer and Xen, two popular virtualization platforms. This is an important and timely topic given the increasing use of virtual machines in both production networks (e.g., Amazon EC2) and research testbeds (e.g., PlanetLab).

The measurements are carefully designed and insightful. The results not only reinforce similar previous findings but also shed more light on the potential root causes. Interesting new findings include: (i) heavy network traffic from competing virtual machines can introduce significant delay into RTT measurements, and (ii) most of the delay is introduced while sending packets rather than while receiving them.

The paper also discusses the implications of these findings and proposes a feedback-based mechanism to avoid measurement bias in virtualized environments. While some of these findings may require further investigation to fully understand their root causes (e.g., by more heavily instrumenting the virtualization platforms), they are clearly useful results to keep in mind when performing RTT measurements in virtualized environments.
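To make the send-side finding concrete, one simple precaution a measurement tool could take is to probe the local virtual machine for scheduling delay around each RTT sample and flag samples taken while the VM appears to be contended. The sketch below is a hypothetical illustration of such a feedback check, not the mechanism proposed in the paper; the short-sleep scheduling probe, the UDP echo target, and the bias threshold are all assumptions made for this example.

```python
import socket
import time

# Illustrative only (not the paper's mechanism): estimate local scheduling
# delay by timing a short sleep; a large overshoot suggests the VM is being
# delayed by competing VMs, so the RTT sample is flagged as possibly biased.

SLEEP_S = 0.001           # nominal sleep duration for the scheduling probe
BIAS_THRESHOLD_S = 0.002  # overshoot beyond which we distrust the sample

def local_scheduling_delay():
    """Return how much longer than requested a short sleep actually took."""
    start = time.monotonic()
    time.sleep(SLEEP_S)
    return (time.monotonic() - start) - SLEEP_S

def measure_rtt(host, port=7, payload=b"ping", timeout=1.0):
    """One UDP round-trip measurement plus a bias flag.

    Assumes a UDP echo service is listening at (host, port).
    """
    delay_before = local_scheduling_delay()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        t0 = time.monotonic()
        sock.sendto(payload, (host, port))
        sock.recvfrom(2048)
        rtt = time.monotonic() - t0
    finally:
        sock.close()
    delay_after = local_scheduling_delay()
    biased = max(delay_before, delay_after) > BIAS_THRESHOLD_S
    return rtt, biased

# Example usage:
#   rtt, biased = measure_rtt("echo.example.org")
#   Samples with biased == True could be discarded or down-weighted.
```

Since the paper's results suggest that the bulk of the virtualization-induced delay occurs on the send path, a check of this kind taken just before transmitting a probe is where such feedback would matter most; the exact probe and threshold would need to be calibrated per platform.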