CS244’13: Proportional Rate Reduction for TCP


Yilong Geng (gengyl08@stanford.edu)
Bo Wang (bowang@stanford.edu)

Introduction
Packet losses can greatly increase latency for web users. Responses experiencing losses can last 7-10 times longer than those without losses. When packet losses occur in a TCP transmission, fast recovery kicks in to help speed up the retransmission of lost packets. Proportional rate reduction (PRR) is a new fast recovery method that recovers from losses quickly, smoothly and accurately by pacing out retransmissions across received ACKs. In particular, PRR improves upon existing fast recovery under a number of conditions, including burst losses and losses near the end of short flows.

The PRR algorithm determines the number of segments to send per ACK during recovery so as to balance two goals: 1) a speedy and smooth recovery from losses, and 2) ending recovery with a congestion window close to ssthresh. PRR has two main parts. The first part, the proportional part, is active when the number of outstanding segments (pipe) is larger than ssthresh, which is typically the case early in recovery and under light losses. It gradually reduces cwnd, clocked by the incoming ACKs. The second part of the algorithm takes over when the number of outstanding segments falls below ssthresh (for example due to excess losses or application stalls during recovery). It then inhibits any further cwnd reductions and instead performs slow start to build the number of outstanding segments back up to ssthresh.
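
At a high level, the per-ACK computation looks like the following Python sketch, loosely following the pseudocode in the PRR paper and RFC 6937. The names (PRRState, prr_sndcnt, recover_fs) are ours, and SACK bookkeeping and limited transmit are omitted; this is a sketch of the idea, not the kernel implementation.

    import math

    class PRRState(object):
        def __init__(self, recover_fs):
            self.prr_delivered = 0        # bytes newly delivered to the receiver during recovery
            self.prr_out = 0              # bytes transmitted during recovery
            self.recover_fs = recover_fs  # flight size (pipe) when recovery started

    def prr_sndcnt(state, delivered, ssthresh, pipe, mss):
        """Bytes that may be sent on this ACK during recovery."""
        state.prr_delivered += delivered
        if pipe > ssthresh:
            # Proportional part: spread the window reduction over the ACKs of one
            # round trip, so that recovery ends with cwnd close to ssthresh.
            sndcnt = int(math.ceil(float(state.prr_delivered) * ssthresh
                                   / state.recover_fs)) - state.prr_out
        else:
            # Slow-start part: pipe has fallen below ssthresh (heavy loss or an
            # application stall), so grow pipe back toward ssthresh instead of
            # reducing cwnd any further.
            limit = max(state.prr_delivered - state.prr_out, delivered) + mss
            sndcnt = min(ssthresh - pipe, limit)
        return max(sndcnt, 0)

After transmission, prr_out is increased by the amount actually sent, and the effective congestion window on this ACK is pipe plus sndcnt.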

Original Results in the Paper
In the paper, the authors obtained the following experimental results:

  1. Statistics of pipe vs. ssthresh at the start of recovery
  2. Values for cwnd vs. ssthresh just prior to exiting recovery
  3. Comparisons of cwnd, time spent in recovery, retransmission statistics, timeouts, and TCP latency for PRR, RFC 3517 and Linux
  4. India YouTube video transfers loss recovery statistics

Validation Targets
In this project, we aim to validate the results in the PRR paper. Since the India YouTube network environment is hard to characterize and we don't have access to Google's internal TCP testing tool used in the paper, we set the following validation targets.

  1. Validate that the change of cwnd upon packet loss matches the expected behavior.
  2. Validate the latency improvement of PRR over Linux's original fast recovery mechanism.
  3. Validate PRR’s performance in both light and heavy loss environments.

Experiment Setup
To reproduce the results of PRR, we chose Mininet as the emulation platform. Since PRR has been part of the Linux kernel since version 3.2, we ran the experiments on a kernel with PRR (Ubuntu 12.04 with kernel 3.2) and one without PRR (Ubuntu 11.10 with kernel 3.0). We initially planned to compile and use the Linux kernels right before and after the PRR commit, but due to problems compiling openvswitch against a custom kernel we had to fall back to the current approach; the details are in the Challenges section. The newest version of Ubuntu runs Linux kernel 3.5, which contains further changes to PRR that are not documented in the paper, so we avoided it in order to reproduce the paper's results more faithfully.

Our experiments use a simple server-switch-client topology. The client fetches web pages of different sizes from the server under different link conditions, congestion conditions and Linux kernel versions (with/without PRR). The focus of our experiments is how PRR reduces the delay of a short web page flow that experiences packet loss. We simulate this scenario by setting the link loss rate in Mininet.
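
For reference, the topology and link parameters can be set up with a few lines of Mininet's Python API. This is only a minimal sketch: the class name, default parameters and overall structure are ours, not the actual code in run.sh.

    from mininet.net import Mininet
    from mininet.topo import Topo
    from mininet.link import TCLink

    class PRRTopo(Topo):
        """Server -- switch -- client, with a lossy bottleneck link."""
        def __init__(self, bw=1.2, loss=1, queue=100, **opts):
            Topo.__init__(self, **opts)
            server = self.addHost('server')
            client = self.addHost('client')
            switch = self.addSwitch('s1')
            # Bottleneck link: 1.2 Mbps, configurable loss rate (%), 100-packet queue.
            self.addLink(server, switch, bw=bw, loss=loss, max_queue_size=queue)
            self.addLink(client, switch, bw=bw)

    net = Mininet(topo=PRRTopo(loss=1), link=TCLink)
    net.start()
    server, client = net.get('server', 'client')
    # ... run the web page fetches / iperf probes here ...
    net.stop()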

To observe the behavior of PRR in fast recovery, we use a single long-lived iperf flow to probe the bottleneck link, which triggers packet drops once cwnd grows large enough.

To better understand the relationship between delay and packet loss rate, we also ran the experiments with different link loss rates.

Results

1. Fast Recovery Behavior with / without PRR

To observe the effect of PRR, we first look at the cwnd dynamics under the two Linux kernels. In Mininet on each kernel, we instantiate a long iperf flow from the server to the client. In this experiment the maximum queue length is 100 packets and the bottleneck bandwidth is 1.2 Mbps.
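
The cwnd trace behind the figures comes from the kernel's tcp_probe module. A rough sketch of how the long iperf flow and the trace can be driven from a Mininet script follows; the port number, file path and helper name are illustrative assumptions, not copied from our scripts.

    import time

    def capture_cwnd(net, duration=60, port=5001):
        """Run a long iperf flow and log cwnd samples with tcp_probe."""
        server, client = net.get('server', 'client')

        # Load tcp_probe for the iperf port; 'full=1' logs a sample on every ACK.
        server.cmd('modprobe -r tcp_probe; modprobe tcp_probe port=%d full=1' % port)
        server.cmd('cat /proc/net/tcpprobe > /tmp/tcp_probe.txt &')

        # Long-lived flow from server to client to grow cwnd and trigger drops.
        client.cmd('iperf -s -p %d &' % port)
        server.cmd('iperf -c %s -p %d -t %d &' % (client.IP(), port, duration))

        time.sleep(duration)
        server.cmd('killall -9 iperf cat')  # /tmp/tcp_probe.txt now holds the trace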

[Figures: cwnd of a single iperf flow over time, without PRR (Linux 3.0) and with PRR (Linux 3.2)]

From the two figures above we can see the two main parts of PRR. First, PRR gradually reduces the congestion window clocked by the incoming acknowledgments. The algorithm is patterned after rate halving, but uses a fraction that is appropriate for the target window chosen by the congestion control algorithm.

If pipe becomes smaller than ssthresh (such as due to excess losses or application stalls during recovery), the second part of the algorithm attempts to inhibit any further congestion window reductions. Instead it performs slow start to build the pipe back up to ssthresh subject to the availability of new data to send.

2. A Comparison of TCP Latency with / without PRR

This experiment focuses on the improvement PRR brings to TCP latency in the case of packet losses. We set the link loss rate to 1% in both environments and fetch each file 1000 times. We chose 1000 fetches because we are gathering the average and standard deviation of the abnormal cases (i.e. those with packet losses), which occur infrequently and have relatively large variance. The drawback is that 1000 fetches take a long time to finish (around 9 hours). To make the test faster for others to run, we reduced the number of fetches to 100 in the script so that it finishes in around 50 minutes, at the cost of somewhat larger variance.
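
Each data point is simply a timed HTTP fetch of a fixed-size file, run inside Mininet. A sketch of the measurement loop is below; the URL, the use of SimpleHTTPServer and curl, and the helper name are illustrative assumptions about our script.

    def fetch_times(net, url='http://10.0.0.1/index_75k.html', n=100):
        """Fetch a page n times and return the completion time of each fetch."""
        server, client = net.get('server', 'client')
        server.cmd('python -m SimpleHTTPServer 80 &')  # serve the test files

        times = []
        for _ in range(n):
            # curl reports the total transfer time in seconds.
            out = client.cmd("curl -o /dev/null -s -w '%%{time_total}' %s" % url)
            times.append(float(out))

        server.cmd('kill %python')
        return times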

Among these fetches, a small fraction suffer from packet losses and high latency. We compute the average and standard deviation of the fetch times for each case, and flows more than two sigmas from the mean are identified as outliers. We plot the two groups (flows with few or no packet losses, and flows with packet losses) in the bar charts below.
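
The two-sigma split itself is straightforward; roughly:

    def split_by_two_sigma(times):
        """Separate fetch times into normal fetches and loss-affected outliers."""
        n = len(times)
        mean = sum(times) / n
        std = (sum((t - mean) ** 2 for t in times) / n) ** 0.5

        normal   = [t for t in times if abs(t - mean) <= 2 * std]
        outliers = [t for t in times if abs(t - mean) > 2 * std]  # likely hit losses
        return normal, outliers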

[Figures: average fetch latency at 1% link loss, without PRR and with PRR, for flows with few or no losses and for flows with packet losses]

From the figures we can see that the latency in the case of packet losses is significantly lower with PRR than without it, while for flows with few or no packet losses the latencies are essentially the same under both kernels. This demonstrates PRR's effect in reducing latency when packet losses occur.

3. PRR’s Effects on Different Loss Rates

Now let's examine how the packet loss rate affects the delay of a web page transfer with and without PRR. The previous experiment used a link loss rate of 1%; here we repeat it with a link loss rate of 3% (a loss rate greater than 3% would make the simulation too slow to run).

[Figures: average fetch latency at 1% and 3% link loss, without PRR and with PRR]

In the figures above, each bar is the average latency over all fetches, both with and without packet losses. We can see that without PRR, the delay increases dramatically when the link loss rate is raised to 3%. With PRR, the delay also increases, but not as much as it does without PRR. This suggests that PRR helps under both light and heavy loss, and especially when the link loss rate is high.

Challenges

1. Compiling openvswitch on a customized kernel
The biggest challenge of our project was getting openvswitch compiled and running on a customized Linux kernel. To precisely isolate the effect of PRR, we decided to check out the Linux kernel source right before and after the PRR commit, compile these two kernels, and install them in an Ubuntu system. We successfully compiled and installed the new kernel and headers. However, openvswitch (installed as a Linux kernel module) failed to compile, complaining about a redefinition error.

We also tried replacing the compiled custom ipv4 modules in a stock Ubuntu's module directory, i.e. /lib/modules/`uname -r`/kernel/net/ipv4. However, the system complained that the inserted modules could not be loaded. After exhausting these approaches, we finally resorted to running the experiments on stock Ubuntu 12.04 and Ubuntu 11.10 installations. Ubuntu 12.04 uses a Linux 3.2 kernel with PRR integrated, while Ubuntu 11.10 uses a Linux 3.0 kernel without PRR. We understand that there are more changes than just PRR between the networking code in these two kernel versions, which can make the comparison less precise, but given our limited time and limited understanding of openvswitch, this appeared to be the most feasible approach.

2. Collecting internal TCP information
Another challenge was the tooling for collecting traces. In Mininet we use tcp_probe to obtain the trace, which only outputs the following fields: time in seconds, sender address:port, receiver address:port, bytes in packet, next send sequence number, unacknowledged sequence number, congestion window, slow start threshold, and send window. Some information internal to TCP is therefore missing, such as the time spent in recovery and the value of pipe, so some results in the paper cannot be reproduced for lack of data. In the PRR paper, the authors mention that they used a TCP testing tool developed internally at Google.
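
For reference, each tcp_probe line is whitespace-separated in the field order listed above, so pulling out cwnd and ssthresh for plotting takes only a few lines. The column positions assume the 3.x kernel format, and the endpoint filter and function name are illustrative.

    def parse_tcp_probe(path, endpoint='10.0.0.1:5001'):
        """Extract (time, cwnd, ssthresh) samples for one flow from a tcp_probe trace."""
        samples = []
        with open(path) as f:
            for line in f:
                cols = line.split()
                # Field order: time, sender, receiver, length, snd_nxt, snd_una,
                # cwnd, ssthresh, snd_wnd (some kernels append srtt).
                if len(cols) < 9 or endpoint not in (cols[1], cols[2]):
                    continue
                samples.append((float(cols[0]), int(cols[6]), int(cols[7])))
        return samples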

3. Lack of real traffic testing environment
In the paper's experiments, the authors tested PRR on YouTube video transfers to India. This real-world testing environment is not available to us, so we approximated it by setting the link loss rate in Mininet.

Instructions to Replicate the Experiments

1. Create two VMs on EC2 for Ubuntu 11.10 and 12.04
    Choose c1.medium as the instance type. Images for both VMs can be found in Public AMIs as follows.
       Ubuntu 11.10
       [Name] ubuntu/images/ebs/ubuntu-oneiric-11.10-amd64-server-20130203
       [ID]       ami-13ba2d7a
       Ubuntu 12.04
       [Name] ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20120424
       [ID]       ami-a29943cb

2. Install Mininet on both new VMs
     sudo apt-get install git curl
     git clone git://github.com/mininet/mininet
     mininet/util/install.sh -a

3. Install and run testing program
     git clone git://github.com/gengyl08/cs244_pa3.git
     cd cs244_pa3
     sudo ./run.sh

4. Interpret the generated graphs
    In each VM, there is a test_output_$timestamp folder. You can find the following files in it:

  • tcp_probe.png: Figure of cwnd of a single flow
  • result_loss1.png: Latency at link loss rate of 1%
  • result_loss3.png: Latency at link loss rate of 3%

3 responses to “CS244’13: Proportional Rate Reduction for TCP”

  1. We note that the AMIs are in the N. Virginia cluster (us-east-1) which is important to know. Setup was quite simple beyond that and worked as described.

    The provided code produced mostly matching graphs of cwnd with and without PRR (Part 1) and approximately matching graphs of delay with link loss rate 1% (Part 2). Delay with retransmission without PRR was significantly higher in our results for 75KB and significantly lower for 750KB — these bars almost matched while the provided plot shows 750KB more than double 75KB.

    The graphs in Part 3 were not produced at all. The code produced graphs for loss rates 1% and 3% separately and no produced plot directly compared delay at loss rates 1% and 3%.

    Our graphs are available at https://www.dropbox.com/sh/kbutdsr3qizpdw3/EAwVPAwYya


    • Thanks for your reply! We made a small mistake: the figure in Part 3 is placed in the main directory instead of the result directory. Can you find it there? We’ve already fixed that bug and it should work fine now. Also, to get stable results the experiment actually needs to run much longer, by increasing the number of samples in run.sh. That would take several hours, so we produced a lightweight version of run.sh for you to test. If you have time, please give it a try. ;) Thanks!

      • The loss.png files were indeed present in the main directory of the experiment. They have been added to the Dropbox link above. The no-PRR graph seems to show the opposite trend (75KB and 750KB are essentially reversed in our graph versus the graph in Part 3), but the PRR graph shows essentially the same trend.

        I recognize that there will be some noise to the data with fewer repetitions, but purposely just went with what was provided. (As an aside, my AWS account thanks you for not making the simulation take 9 hours.)
