Team: Vimalkumar Jeyakumar, Nikhil Handigol, Brandon Heller, Bob Lantz.
Key Result(s): Without adequate performance isolation, Container-Based Emulators such as vEmulab and the original Mininet show unreliable network throughput. Adding CPU and link bandwidth limiting yields consistent performance.
- Mike Hibler, Robert Ricci, Leigh Stoller, Jonathon Duerig, Shashi Guruprasad, Tim Stack, Kirk Webb, and Jay Lepreau. Large-scale virtualization in the Emulab network testbed. In USENIX 2008 Annual Technical Conference, pages 113–128, Berkeley, CA, USA, 2008. USENIX Association.
- B. Lantz, B. Heller, and N. McKeown. A network in a laptop: Rapid prototyping for software-defined networks. In Proceedings of the Ninth ACM SIGCOMM Workshop on Hot Topics in Networks, page 19. ACM, 2010.
Container-based emulation is a popular and effective method of prototyping and testing network systems designs. Without adequate performance isolation, however, it can be unsuitable for conducting experiments where performance fidelity, rather than simple functional fidelity, matters. In particular, systems like CORE and Mininet make no attempt at maintaining performance fidelity, while vEmulab achieves performance fidelity by scaling across multiple nodes and only isolates network bandwidth when running on a single machine. Trellis notes that CPU isolation is possible, but we were unable to find results demonstrating it.
This test does not reproduce a particular research experiment; rather, it is a microbenchmark that measures the bandwidth between a sender and a receiver (using iperf) in the presence of "time-waster" processes (while(1) loops in bash) on a set of unrelated hosts in the same emulation.
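The "time-waster" load described above can be sketched as a small shell script. This is a hypothetical illustration, not one of the repository's scripts, and the iperf server address is a placeholder:

```shell
#!/bin/sh
# Spawn four "time-waster" busy loops, as in the benchmark description.
WASTERS=""
for i in 1 2 3 4; do
    ( while true; do :; done ) &   # pure-CPU infinite loop in the shell
    WASTERS="$WASTERS $!"
done

# With the load running, the sender would measure TCP throughput, e.g.:
#   iperf -c <server-ip> -t 30

kill $WASTERS                      # stop the background load
wait $WASTERS 2>/dev/null          # reap the killed loops
echo "spawned and stopped 4 wasters"
```

In the actual experiment the loops run on other virtual hosts in the same emulation, which is exactly why CPU isolation matters: without it, their cycles are stolen from the hosts running iperf.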
The test was run on two emulators: virtual Emulab, on the Utah Emulab testbed, requesting a set of ten virtual nodes (which Emulab's assign system automatically placed on a single 800 MHz PC), and Mininet, creating a network of ten nodes on a single PC (a 2.4 GHz Core 2 laptop). Note that we chose FreeBSD nodes in Emulab to closely match the original Emulab paper, while Mininet ran on Ubuntu Linux on the same hardware used in the original Mininet paper.
Figure 1 plots the TCP bandwidth in a simple benchmark where two virtual hosts communicate at full speed over a 200Mb/s link. In the background, we vary the load on a number of other (non-communicating) virtual hosts. On Mininet without Mininet-HiFi's performance isolation features, the TCP flow exceeds desired performance at first, then degrades gradually as the background load increases. Though vEmulab correctly rate-limits the links, that alone is not sufficient: increasing background load affects the network performance of other virtual hosts, leading to unrealistic results. Ideally, the TCP flow would see a constant throughput of 200Mb/s irrespective of the background load on the other virtual hosts.
Turning on CPU and link bandwidth limits in Mininet-HiFi yields the “ideal” line with correct bandwidth irrespective of unrelated system load.
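These limits can be expressed directly in a Mininet topology definition. The sketch below is a hypothetical configuration assuming the Mininet 2.x Python API (`CPULimitedHost` for CFS-based CPU caps, `TCLink` for tc-based rate limits); the experiment's actual script is emulab-test.py in the repository. Starting the network requires root and a Mininet installation, so this is shown as a configuration fragment only:

```python
# Hypothetical topology sketch (assumes the Mininet 2.x API; the actual
# experiment uses emulab-test.py from the repository).
from mininet.net import Mininet
from mininet.node import CPULimitedHost
from mininet.link import TCLink
from mininet.topo import Topo

class IsolatedTopo(Topo):
    """Ten CPU-limited hosts on one switch, each with a 200 Mb/s link."""
    def build(self):
        switch = self.addSwitch('s1')
        for i in range(1, 11):
            # cpu=0.05: cap each host at ~5% of a core via CFS bandwidth limiting
            host = self.addHost('h%d' % i, cpu=0.05)
            # bw=200: tc-enforced 200 Mb/s rate limit on the host's link
            self.addLink(host, switch, bw=200)

# host=CPULimitedHost and link=TCLink activate the limits; requires root.
net = Mininet(topo=IsolatedTopo(), host=CPULimitedHost, link=TCLink)
```

With `cpu` and `bw` omitted, the same topology reproduces the unisolated ("none") behavior shown in Figure 1.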
Instructions to replicate this experiment:
git clone git://bitbucket.org/nikhilh/mininet_tests.git
cd emulab
- Create a new emulab instance based on emulab-config.ns using a FreeBSD image. (Note: we used a FreeBSD node because we wanted to compare against the original virtual Emulab implementation and paper, which used FreeBSD.)
- Install iperf into the FreeBSD image.
- Run emulab-test.sh
- Install Mininet-HiFi onto your test system (we used a 2.4 GHz Core 2 laptop).
- Run emulab-test.py
- It should produce results with (cfs) and without (none) CPU and bandwidth isolation.
The results can be plotted using emulab-plot.py.