Mini-Stanford Backbone

Team: James Hongyi Zeng and Peyman Kazemian.

Key Results: A replicated mini-Stanford backbone in Mininet-HiFi, with real topology and real configuration. Experimenters may use it to verify network connectivity in different scenarios.

Source: Original research

Contacts: James Hongyi Zeng, Peyman Kazemian


It is notoriously hard to debug networks. Every day network engineers wrestle with router misconfigurations, fiber cuts, faulty interfaces, mis-labeled cables, software bugs, intermittent links, and a myriad of other problems that cause networks to misbehave or fail completely. Network engineers hunt down bugs using the most rudimentary tools (e.g., ping, traceroute, and tcpdump) and track down root causes using a combination of accrued wisdom and intuition. Debugging networks is only becoming harder as networks grow bigger (modern data centers may contain 10,000 switches, a campus network may serve 50,000 users, and a 100Gb/s long-haul link may carry 100,000 flows) and more complicated (with over 6,000 RFCs, routers often process more than ten encapsulation protocols simultaneously, router software is based on millions of lines of source code, and network chips often contain billions of gates). Small wonder that network engineers have been labeled “masters of complexity”.

Mininet-HiFi provides a unique opportunity to tackle the network troubleshooting problem. In this project, we replicated the Stanford backbone network in Mininet-HiFi. We used Open vSwitch (OVS) to emulate the routers, using the real port configuration information, and connected them according to the real topology. We then translated the forwarding entries in the Stanford backbone network into equivalent OpenFlow rules and installed them in the OVS switches. We used emulated hosts to send and receive test packets to “probe” the network status. The graph below shows the part of network that is used for experiments in this section.
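As a minimal sketch of the translation step, the snippet below shows how one IP forwarding entry might be turned into an OpenFlow-style rule for installation in OVS. The dictionary field names, the helper names, and the example prefix are illustrative assumptions, not the project's actual translator:

```python
# Hypothetical sketch: translating an IP forwarding entry into an
# equivalent OpenFlow-style rule. Field and helper names are illustrative.

def prefix_to_match(prefix):
    """Build an OpenFlow-style match on a destination IP prefix."""
    return {"dl_type": 0x0800, "nw_dst": prefix}  # 0x0800 = IPv4 EtherType

def translate_entry(prefix, out_port):
    """Match on the destination prefix and forward out the given port."""
    rule = prefix_to_match(prefix)
    rule["actions"] = [("output", out_port)]
    # Longest-prefix match must win, so use the prefix length as priority.
    rule["priority"] = int(prefix.split("/")[1])
    return rule

rule = translate_entry("171.64.0.0/14", out_port=3)
print(rule["priority"], rule["actions"])  # → 14 [('output', 3)]
```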


Below, we present different test scenarios and the corresponding results:

Forwarding Error: To emulate a functional error, we deliberately created a fault by replacing the action of an IP forwarding rule in boza matching on dst_ip = with a “drop” action (we call this rule R_1^{boza}). As a result of this fault, test packets from boza to coza with this dst_ip failed and were not received at coza. The table below shows two other test packets that are used to localize and pinpoint the fault. These test packets, goza-coza and boza-poza, were received correctly at their end terminals. From the rule histories of the passing and failing packets, we can deduce that only rule R_1^{boza} could possibly cause the problem, as every other rule appears in the rule history of a received test packet.
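The elimination argument above can be sketched in a few lines: a rule is suspect only if it appears in a failing packet's rule history and in no passing packet's history. The rule names below are illustrative stand-ins for the real rule histories:

```python
# Sketch of fault localization from rule histories (names illustrative).

def localize(failing_histories, passing_histories):
    """Return rules on failing paths that no passing packet exercised."""
    exonerated = set().union(*passing_histories)
    return set().union(*failing_histories) - exonerated

failing = [{"R1_boza", "R_bbra", "R_coza"}]            # boza -> coza (dropped)
passing = [{"R_goza", "R_bbra", "R_coza"},             # goza -> coza (received)
           {"R2_boza", "R_bbra", "R_poza"}]            # boza -> poza (received)

print(localize(failing, passing))  # → {'R1_boza'}
```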

Congestion: We detect congestion by measuring the one-way latency of test packets. In our emulation environment, all terminals are synchronized to the host’s clock, so latency can be calculated from a single timestamp.
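Because sender and receiver share one clock, the sender can embed its send time in the probe and the receiver subtracts it on arrival. A minimal sketch, assuming a JSON probe payload (the message format is an illustrative choice, not the project's wire format):

```python
# One-way latency with a single timestamp, assuming a shared host clock.
import json
import time

def make_probe(seq):
    """Sender side: embed the send time in the probe payload."""
    return json.dumps({"seq": seq, "sent_at": time.time()}).encode()

def one_way_latency(probe):
    """Receiver side: latency is arrival time minus embedded send time."""
    msg = json.loads(probe.decode())
    return time.time() - msg["sent_at"]

latency = one_way_latency(make_probe(seq=1))
print(latency >= 0.0)  # → True
```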

To create congestion, we rate-limit all the links in the emulated Stanford network to 30Mb/s and create two 20Mb/s UDP flows: poza to yoza at t=0 and roza to yoza at t=30s, which congest the link bbra-yoza starting at t=30s. The bottom left graph next to yoza shows the two UDP flows. Queues inside the routers build up, and test packets experience longer queuing delay.
The bottom right graph next to pozb shows the latency experienced by two test packets, one from pozb to roza and the other from pozb to yoza. At t=30s, the pozb-yoza test packet experiences a much higher latency, correctly signaling congestion. Since these two test packets share the pozb-s_1 and s_1-bbra links, we can conclude that the congestion is not on those links, and therefore correctly infer that bbra-yoza is the congested link.
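The same elimination logic applies to links: any link shared with an unaffected probe cannot be the congested one. A sketch, with link names taken from the scenario above:

```python
# Infer the congested link: links shared with probes that saw normal
# latency are ruled out; what remains on the delayed path is suspect.

def congested_candidates(delayed_path, normal_paths):
    shared_ok = set()
    for path in normal_paths:
        shared_ok |= set(delayed_path) & set(path)
    return set(delayed_path) - shared_ok

path_yoza = ["pozb-s_1", "s_1-bbra", "bbra-yoza"]   # delayed probe
path_roza = ["pozb-s_1", "s_1-bbra", "bbra-roza"]   # normal probe

print(congested_candidates(path_yoza, [path_roza]))  # → {'bbra-yoza'}
```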

Available Bandwidth: We can also monitor available bandwidth. For this experiment, we use Pathload, a bandwidth probing tool based on packet pair/packet train techniques. We repeat the previous experiment, but decrease the two UDP flows to 10Mb/s, so that the bottleneck available bandwidth is 10Mb/s. Pathload reports that pozb-yoza has an available bandwidth of 11.715Mb/s and pozb-roza an available bandwidth of 19.935Mb/s, while the other (idle) terminals report 30.60Mb/s. Using the same argument as before, we automatically conclude that the bbra-yoza link is the bottleneck, with 10Mb/s of available bandwidth.
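The bottleneck inference can be sketched the same way: the path with the lowest reported bandwidth is bottlenecked on a link it does not share with the better-off paths. The per-path figures below are the Pathload reports quoted above; the path encoding is an illustrative assumption:

```python
# Infer the bottleneck link from per-path available-bandwidth reports:
# take the worst path and remove links it shares with every other path.

def bottleneck_links(reports):
    worst_path = min(reports, key=reports.get)
    shared = set.intersection(*(set(p) for p in reports))
    return set(worst_path) - shared

reports = {
    ("pozb-s_1", "s_1-bbra", "bbra-yoza"): 11.715,  # Mb/s, from Pathload
    ("pozb-s_1", "s_1-bbra", "bbra-roza"): 19.935,
}
print(bottleneck_links(reports))  # → {'bbra-yoza'}
```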

Priority: We create priority queues in OVS using Linux’s htb scheduler and tc utilities. We replicate the previously “failed” test case pozb-yoza for high and low priority queues respectively. The graph below shows the result.
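For reference, attaching an htb QoS with two queues to an OVS port is a single ovs-vsctl transaction; the helper below just builds that command string. The port name and rate values are illustrative, and this is a sketch of one plausible configuration, not the project's exact setup:

```python
# Build an ovs-vsctl command that attaches a linux-htb QoS with two
# queues (0 = low priority, 1 = high priority) to a port.
# Port name and rates are illustrative placeholders.

def htb_qos_cmd(port, max_rate_bps, high_min_rate_bps):
    return (
        f"ovs-vsctl -- set port {port} qos=@newqos "
        f"-- --id=@newqos create qos type=linux-htb "
        f"other-config:max-rate={max_rate_bps} "
        f"queues:0=@low queues:1=@high "
        f"-- --id=@low create queue other-config:max-rate={max_rate_bps} "
        f"-- --id=@high create queue other-config:min-rate={high_min_rate_bps}"
    )

print(htb_qos_cmd("s1-eth1", 30_000_000, 15_000_000))
```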

We first repeat the congestion experiment. When the low priority queue is congested (i.e. both UDP flows mapped to low priority queues), only low priority test packets are affected. However, when the high priority slice is congested, low and high priority test packets experience the congestion and are delayed. Similarly, when repeating the available bandwidth experiment, high priority flows perceive the same available bandwidth whether we use high or low priority test packets. But for low priority flows, the high priority test packets correctly perceive the full link bandwidth.
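The observed behavior reduces to a simple rule, captured below: congestion in the high-priority queue delays everything, while congestion in the low-priority queue delays only low-priority probes. This predicate is our own summary of the results above, not code from the project:

```python
# Summary of the priority-queue congestion behavior observed above.

def probe_delayed(probe_is_high_prio, congested_queue_is_high):
    """True if a test packet sees extra queuing delay under congestion."""
    # High-priority congestion delays all probes; low-priority congestion
    # delays only low-priority probes.
    return congested_queue_is_high or not probe_is_high_prio

# Low-priority queue congested: only low-priority probes are affected.
print(probe_delayed(False, False), probe_delayed(True, False))  # → True False
```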

Lessons Learned

Mininet-HiFi makes it remarkably easy to conduct network research: this project demonstrates that the entire Stanford backbone can be replicated in a single Mininet instance.

Instructions to Replicate This Experiment


