MPTCP Wireless Performance


Project Name: MPTCP Wireless Performance

Team Members: Maxine Lim / maxinel@, Josh Valdez / joshuav@

Key Results: We demonstrated that MPTCP, by using multiple wireless interfaces, outperforms TCP across a range of buffer sizes, and we confirmed MPTCP's ability to perform seamless wireless handoff.

Sources:
Raiciu, C., Paasch, C., Barre, S., Ford, A., Honda, M., Duchene, F., Bonaventure, O., et al. (2012). How Hard Can It Be? Designing and Implementing a Deployable Multipath TCP. USENIX NSDI.

MPTCP: MPTCP Linux Kernel Implementation

Padhye, J., Firoiu, V., Towsley, D., & Kurose, J. (1998). Modeling TCP Throughput: A Simple Model and its Empirical Validation. ACM SIGCOMM.

Introduction
A primary motivation for using MPTCP is to enable hosts to take advantage of WiFi and 3G networks via multiple interfaces simultaneously for a given flow. While TCP can only use one path for a flow, MPTCP can split a flow across many paths and consequently achieve higher throughput. This allows for faster data transfer overall and smoother handoff when changing networks.

In the paper by Raiciu et al., Figure 9 shows how MPTCP can take advantage of both WiFi and 3G networks, consistently outperforming TCP. This figure shows the average goodput for MPTCP used over real WiFi and 3G compared to TCP used over WiFi and 3G separately, over a range of buffer sizes. As the buffer size increases, MPTCP is able to more effectively exploit the subflow through the extra interface. This is the primary figure we replicate in this post.

Raiciu et al. also show a video demonstrating a handoff using MPTCP over Ethernet, WiFi, and 3G, also shown below. They have a screensaver application sending data using MPTCP over all three interfaces. The bulk of the bandwidth goes onto the Ethernet link, since it has the best throughput. When the Ethernet link is taken down, we see the traffic migrate to the WiFi link, which has the second best throughput, though some of it goes to the 3G link as well. When WiFi is turned off, all of the data is sent through the 3G link, and the screensaver application no longer runs as smoothly, reflecting the lower throughput available from the 3G link alone. As WiFi and Ethernet are turned back on, the total throughput returns to its previous value and the application smooths out. We also replicate this MPTCP handoff as a verification of our initial MPTCP setup.

In the rest of the post we cover our experimental methods, results, and lessons learned, including ways in which our results differed as well as directions for future replication.

Methodology
Setup
We ran the experiment using Mininet on an MPTCP-enabled kernel. To set up the test environment we compiled a custom MPTCP kernel on an Amazon EC2 c1.medium instance running Ubuntu 12.04. The version of MPTCP is commit 3f087b2f1bb35d48d090258615ea76a07d4f446c from the git repository: git://mptcp.info.ucl.ac.be/mptcp.git. Next we installed Mininet and configured the IP routing tables for multiple interfaces within our test scripts using ifconfig.
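
Each interface typically needs its own routing rules so that MPTCP subflows leave through the correct interface. The sketch below shows one common way to set this up with source-based routing; the addresses, interface names, and table numbers are placeholders rather than the exact commands from our test scripts.

# Hypothetical source-routing setup for a two-interface MPTCP host
# (addresses, interface names, and table numbers are placeholders).
ip rule add from 10.0.1.1 table 1
ip route add 10.0.1.0/24 dev h1-eth0 scope link table 1
ip route add default via 10.0.1.254 dev h1-eth0 table 1
ip rule add from 10.0.2.1 table 2
ip route add 10.0.2.0/24 dev h1-eth1 scope link table 2
ip route add default via 10.0.2.254 dev h1-eth1 table 2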

Link Configuration
The Ethernet links were configured to perform at a maximum of 10Mbps with a 1ms delay and a 0% packet drop rate. We had initially set them to 100Mbps but reduced the rate because the lower speed showed differences between link throughputs more clearly; at 100Mbps those differences might have been too subtle to pick up on.

The 3G and WiFi links were capped at 2Mbps with delays of 75ms and 5ms, and drop rates of 2% and 3%, respectively. These values were suggested by KK Yap, a Ph.D. student and wireless authority at Stanford. Originally we had wanted to give the WiFi link a drop rate of 5% and to add jitter. However, we found that either a drop rate above 3% or added jitter would often prevent the sender and receiver from communicating at all, so we settled for more conservative values without jitter.
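
As a rough point of reference, the settings above correspond approximately to the following tc configuration; the interface name is a placeholder, and in practice Mininet's traffic-shaped links install equivalent htb and netem disciplines for us.

# Hypothetical tc settings approximating the 3G link (interface name is a placeholder).
tc qdisc add dev s2-eth1 root handle 1: htb default 10
tc class add dev s2-eth1 parent 1: classid 1:10 htb rate 2mbit
tc qdisc add dev s2-eth1 parent 1:10 handle 10: netem delay 75ms loss 2%
# The WiFi link would instead use "delay 5ms loss 3%", and the Ethernet links
# "rate 10mbit" with "delay 1ms" and no loss.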

Topologies
The topology we used to replicate the handoff consisted of one sender, one receiver, and a set of three switches, and is shown in Figure 1a. Three links connected the sender to the switches, one per switch, and three additional links connected each of the switches to the receiver, for a total of six links. The links from the sender to the switches were configured as Ethernet links. The three links from the switches to the receiver were configured as an Ethernet link, a 3G link, and a WiFi link, respectively. To replicate Figure 9, we first removed the Ethernet switch to test MPTCP, and then further removed the 3G or WiFi switch and links for TCP, as shown in Figures 1b and 1c respectively.

Wireless Handoff
To ensure that MPTCP was working correctly, we first attempted the wireless handoff, shown in Figure 2. We used iperf to send data over the links and bwm-ng to measure throughput. We wanted to show that all three links would initially be fully utilized, and that the utilizations would change as we turned the different links off and on. As expected, when all links are up initially, Ethernet gets the highest throughput, followed by WiFi and 3G. At the 7 second mark Ethernet is turned off, and its throughput drops off. At the 14 second mark WiFi is disabled, leaving only 3G. At 21 seconds, WiFi is re-enabled, and at about 28 seconds, Ethernet is re-enabled. Both links quickly recover their initial throughput.
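
A rough sketch of this sequence is shown below; the addresses and interface names are placeholders, and our actual test script drives the same steps from within Mininet.

# Hypothetical handoff sequence (addresses and interface names are placeholders).
# Receiver: iperf -s &
# Monitor:  bwm-ng -o csv -t 1000 > handoff.csv &
iperf -c 10.0.3.2 -t 40 &           # long-running flow from the sender
sleep 7; ifconfig r-eth0 down       # take the Ethernet link down at 7s
sleep 7; ifconfig r-eth1 down       # take the WiFi link down at 14s
sleep 7; ifconfig r-eth1 up         # bring WiFi back at 21s
sleep 7; ifconfig r-eth0 up         # bring Ethernet back at 28s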

MPTCP Wireless Performance
After we verified that MPTCP was exhibiting the expected behavior, we moved on to our main replication, Figure 9 from the paper, which compares MPTCP performance to TCP over real 3G and WiFi across a series of buffer sizes. After modifying our topology as described, we took throughput samples over a period of 30 seconds, beginning 10 seconds after the start of the flow to allow the throughput to level off. We note that the paper uses goodput as the main measurement, though we used throughput instead. To vary the receiver and sender buffer sizes, we modified the net.core.wmem_max, net.core.rmem_max, and net.ipv4.tcp_rmem options in /etc/sysctl.conf to reflect the buffer size being tested, as in the example below.
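
As an illustration, a 500KB run might use entries along these lines (the exact numbers come from our sweep script; those below are only an example), applied with sudo sysctl -p:

# Example /etc/sysctl.conf entries for a hypothetical 500KB buffer test
net.core.wmem_max = 512000
net.core.rmem_max = 512000
net.ipv4.tcp_rmem = 10240 87380 512000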

Our first set of test runs used the same set of buffer sizes as the paper, ranging from 50KB to 500KB. The results are plotted in Figure 3 against the original plot from Raiciu et al.

Results
Our two main results are demonstrating the functionality of wireless handoff and, more importantly, showing MPTCP's performance improvement over TCP by utilizing multiple paths across wireless links.

Wireless Handoff
Although our methodology for the wireless handoff was significantly different from the one shown in the video, we were able to show that the transitions to and from the different links work properly. As mentioned, the approximate shape of our graph is as expected, although our plot shows some additional fluctuations in throughput. Since we used the handoff primarily to verify that MPTCP was functional, we do not provide a deeper discussion of it.

MPTCP Wireless Performance
Our primary result, displayed in Figure 3, indicates that by exploiting multiple interfaces, MPTCP provides higher throughput than TCP on either interface alone, though it requires larger buffers to reach its full potential.

In our results, as in the original, TCP over WiFi achieves close to full bandwidth across all buffer sizes. However, there are two distinct differences between our graph and the original.

First, while in the original paper TCP over 3G has about 1.2Mbps throughput at the lowest buffer size and obtains full bandwidth at higher buffer sizes, in ours TCP over 3G never gets much higher than 0.5Mbps throughput, even with large buffers. Raiciu et al. attribute the initially lower 3G throughput to a longer round-trip-time caused by a small buffer. We note that the buffer required by the formula RTT×C for this link comes out to around 40KB, so the buffer sizes we test are already much larger than that. Nonetheless, we also measured the throughput with an even larger 1000KB buffer, and it still did not significantly surpass the maximum throughput shown in Figure 3.
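
Concretely, with the 3G link capped at C = 2Mbps and a round-trip-time of roughly 2 × 75ms = 150ms (ignoring the other links' smaller delays), RTT × C ≈ 0.15s × 2Mbps = 0.3Mbit, or about 37.5KB, which is where the 40KB figure above comes from.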

However, we did suspect that round-trip-time might be limiting throughput. We therefore measured throughput for TCP over a link with a fixed 2% packet loss rate and a 500KB buffer while varying the delay. We plotted these data points against points derived from the simplified TCP throughput equation from Padhye et al., which models throughput as a function of round-trip-time and packet loss rate:

B(p) = 1 / (RTT * sqrt(2bp / 3))

In this equation B(p) is the throughput, RTT is the round-trip-time, b is the number of packets acknowledged by a received ACK, and p is the packet loss rate. Figure 4 shows that our curve falls slightly below that of the model but closely matches its shape. The beginning is flat compared to the model because the throughput for this link was capped at 2Mbps. The throughput starts to drop off around 20ms and flattens out around 60ms, where it resembles the values achieved in Figure 3. Since delay clearly has a strong effect on throughput, we attribute the main difference in 3G throughput to a difference in round-trip-time between our emulated 3G link and the real 3G network used in the original paper.
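
For completeness, the model points can be generated directly from this equation; in the sketch below the MSS, b, and loss values are illustrative assumptions and not necessarily the exact parameters behind Figure 4 (multiplying by the MSS in bits converts packets per second into bits per second).

# Hypothetical sketch of evaluating the simplified Padhye model across RTTs.
# MSS, b, and p are assumed values, not necessarily those used for Figure 4.
for rtt_ms in 10 20 40 60 80 100 150; do
  awk -v rtt="$rtt_ms" -v p=0.02 -v b=1 -v mss=1460 'BEGIN {
    bps = (mss * 8) / ((rtt / 1000) * sqrt(2 * b * p / 3))   # bits per second
    printf "RTT %3d ms -> %.2f Mbps\n", rtt, bps / 1e6
  }'
done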

The second difference between our plot and the original is that MPTCP does not achieve quite as high throughput in our results. The original paper shows MPTCP's goodput ranging from 2Mbps to 3.25Mbps, while our range is 2Mbps to 2.5Mbps. One likely factor is the reduced 3G throughput just discussed, which may also limit MPTCP throughput. More generally, differences in our link properties can also lead to different results. We acknowledge that our emulation of the WiFi and 3G links is primitive and unlikely to accurately reflect real networks, so we cannot expect identical results. Additionally, as previously noted, we measure throughput rather than goodput, which may alter the results. Finally, despite the slightly reduced throughput, we were still able to see how MPTCP achieves increasing performance with larger buffers, as described in the original paper, and how it consistently outperforms TCP.

Lessons Learned
The most significant challenges we faced were setting up the testing environment with MPTCP and emulating wireless links. We spent a nontrivial amount of time compiling MPTCP kernels with the correct options. Furthermore, simple mistakes in our setup were hard to detect due to our lack of experience and the lack of documentation. For example, during our initial setup we were seeing surprisingly low throughput on some basic links and thought it might be due to excessive debug output, but it turned out we had made a simple mistake in our routing table configuration. Using EC2 did not generally pose challenges beyond easily resolved issues such as selecting the appropriate instance size: we found that an instance at least as powerful as the c1.medium we used was essential to getting the expected throughput. Learning how to correctly configure link properties such as the sender and receiver buffer sizes also posed challenges later in testing.

Additionally, we found wireless emulation to be difficult, as expected. There are a number of link properties to consider, among them maximum bandwidth, delay, packet loss rate, and jitter. To determine values for our links we consulted KK Yap, a Stanford Ph.D. student and wireless authority in Nick McKeown's group. He provided us with estimates for approximate delay, loss rate, and jitter settings for our links, but cautioned that emulating wireless with these simple parameters alone is unlikely to be very accurate. As mentioned, we ended up not adding jitter, though according to Yap its addition would have made only subtle differences in our results. A more accurate wireless model written as a tc queuing discipline might have replicated the original graph more exactly, but this was infeasible given the time constraints and the lack of detail about the network and link properties used in the original paper. These difficulties all contributed to the discrepancies in our results.

Instructions to Replicate This Experiment
Make a new c1.medium instance using our custom MPTCP kernel AMI listed in the US East region under ‘mptcp-nodebug-12.04’. The AMI also has Mininet installed by default. Once you’ve created the instance, ssh into it and run the commands below:

git clone git://github.com/ExxonValdeez/mptcp-wireless-performance.git
cd mptcp-wireless-performance
sudo ./mptcp_sweep.sh

Running the script will run twelve instances of our topology for varying buffer sizes. The main results for our purposes are included in each of the *.txt files in the form [3G throughput, WiFi throughput, Total throughput]. For TCP, one of the first two will be zero.
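
As one way to prepare the data for plotting, a small post-processing step like the sketch below can collect each run's result line into a CSV; the file naming and line layout assumed here are illustrative, so adjust the parsing to the actual output files.

# Hypothetical post-processing: gather the final result line of each run into a CSV.
# Assumes each *.txt ends with a line of the form "[3G, WiFi, Total]" in Mbps.
echo "run,3g_mbps,wifi_mbps,total_mbps" > results.csv
for f in *.txt; do
  vals=$(tail -n 1 "$f" | tr -d '[] ')   # strip brackets and spaces
  echo "${f%.txt},${vals}" >> results.csv
done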

Take these results and graph them to produce a figure similar to ours. How you format the data depends on the software you use. For our results, we made columns for total throughput, link type, and buffer size in a spreadsheet, copied the throughput values into the throughput column, and filled in the link type and buffer sizes accordingly. We used SPSS, made variables for each of our columns, and then used the Chart Builder to create a clustered bar chart with error bars. Any comparable software should be able to produce similar results.

6 responses to “MPTCP Wireless Performance”

  1. I was able to successfully replicate their results. The setup is relatively painless, just a git clone since they already have everything installed on their AMI. The experiment takes a long time to run, and in the end the output is .txt files. Plugging the resulting data into a spreadsheet and generating a bar graph gave me the following graph, which resembles their results quite closely. The only thing it lacks is error bounds, though running the experiment that many times would have taken well over an hour.

  2. Is it possible to provide estimates of the approximate delay, loss rate, and jitter settings for your links?

  3. Could you please post another link with the files used for the experiment? The github account has been removed. Thank you!

  4. Hi, can you please repost the link with the files used for reproducing the experiment? I am afraid the github account has been removed.
    Thank you, Evelina
