CS 244 ’14: An Argument for Increasing TCP’s Initial Congestion Window


Raejoon Jung & Stephen Quinonez

1. Introduction

The original paper aims to improve the latency of typical web requests. It argues for raising TCP’s initial congestion window from three segments to at least ten in order to expedite the completion of short web transfers.

The motivation behind this idea is twofold: many TCP flows finish before slow start completes, and many browsers work around the small initial window by opening multiple TCP connections at once. Both observations suggest that increasing TCP’s initial congestion window would help typical web requests, such as the web search queries used in the original paper’s experiment. The potential impact is improved completion times for the vast majority of a user’s typical browsing, which involves relatively short TCP flows.

The paper includes several experiments testing the costs and benefits of increasing the initial congestion window from 3 to 10 segments, finding improvements in the average latency of HTTP flows, including on low-bandwidth networks, at the cost of only a modest change in the average retransmission rate.

2. Subset Goals

In this project, we attempt to recreate and verify the chart shown in Figure 6 of the paper [1]. This figure shows the improvement in average response latency for queries to Google’s servers, bucketed by the bandwidth of the connection between Google and the user. It compares the standard TCP initial congestion window of three segments against the experimental ten-segment window.

Fig 1. Average response latency for Web search, bucketed by BW at SlowDC, from paper [1]

We wondered why the improvements in response time would differ across bandwidths. If the HTTP response is small enough to be sent entirely during the TCP slow-start phase, we reasoned, bandwidth should not be a major factor in the improvement.

3. Framework and Subset Results

The original experiment was not simulated but performed on real Google servers responding to actual user queries, so we made several simplifications to the network topology. In Mininet, we worked with a simpler framework: a topology consisting of a single server connected to multiple clients through a single switch. We randomly allocated each client’s link bandwidth according to the SlowDC distribution in the paper. SlowDC is the paper’s term for the data center that serves subnets with a larger proportion of low connection bandwidths, with a median bandwidth of 500Kbps, nearly 20% of traffic going to subnets with bandwidths below 100Kbps, and a median RTT of 70ms. We changed the initial congestion window on the server node and the initial receive window on all nodes using the Linux ip route command, and verified the changes with Wireshark and tcpdump.
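As a rough sketch of this setup (a minimal reconstruction, not our exact script; the class name, bandwidth list, and route-lookup pipeline are illustrative assumptions), the topology and window changes look roughly like this in Mininet’s Python API:

    from mininet.net import Mininet
    from mininet.topo import Topo
    from mininet.link import TCLink

    class StarTopo(Topo):
        "One server and several clients behind a single switch."
        def build(self, client_bws):
            switch = self.addSwitch('s1')
            server = self.addHost('server')
            self.addLink(server, switch, bw=1000)  # fast server-side link
            for i, bw in enumerate(client_bws):
                client = self.addHost('c%d' % i)
                # 35ms each way on the client link yields the fixed
                # 70ms RTT we used for every client (see section 4).
                self.addLink(client, switch, bw=bw, delay='35ms')

    # Illustrative bandwidths in Mbps; the real experiment draws them
    # from the paper's SlowDC distribution (median 500Kbps).
    net = Mininet(topo=StarTopo(client_bws=[0.1, 0.5, 2.0]), link=TCLink)
    net.start()

    # Set the initial receive window on every node and, on the server
    # (the sender), the initial congestion window, via ip route.
    for host in net.hosts:
        route = host.cmd('ip route show | head -n 1').strip()
        opts = 'initrwnd 10'
        if host.name == 'server':
            opts += ' initcwnd 10'
        host.cmd('ip route change %s %s' % (route, opts))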

In the experiment, each client sends an HTTP request to an HTTP server running on the server node, and the server replies with a payload of a size we configure. With initial congestion windows of 3 and 10 segments, we measure the HTTP response time at each client and compare the two to quantify the improvement. To capture the randomness of real queries, we draw the response size from the distribution in the paper (Figure 9) and average the improvement, as illustrated in the sketch below.
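As an illustration, the per-client measurement might look like the following, where the server address, port, and size query parameter are hypothetical stand-ins for our actual harness:

    import time
    import urllib2  # Python 2, as shipped on the Mininet VM

    def fetch_time(url):
        "Return the completion time of one HTTP request, in seconds."
        start = time.time()
        urllib2.urlopen(url).read()
        return time.time() - start

    # The response size is drawn from the paper's Figure 9 distribution
    # and passed to the server so it can size its payload accordingly.
    elapsed = fetch_time('http://10.0.0.1:8080/?size=9000')
    time.sleep(2)  # idle time between requests (see section 4)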

The figure below shows our attempt to reproduce the original result. As expected, response times improve across all bandwidths. However, the absolute improvement appears roughly constant, while the percentage improvement grows with bandwidth. We discuss this further in sections 4 and 5.

Fig 2. HTTP request response time improvement when the initial congestion window is increased from 3 to 10 segments

4. Challenges

Although the paper provides the distributions of bandwidth, RTT, and response size, it does not provide the joint distribution of these settings. We suspect that our result differs from the original figure primarily because of the different combination of these parameters. For instance, the paper mentions that low-bandwidth users tend to have longer RTTs, but with no additional information we decided to fix the RTT at 70ms. This allowed us to observe more cleanly how increasing the initial congestion window affects clients at different bandwidths.

In terms of implementation, we also realized that delays from ip route changes or ARP broadcasts, which occur irregularly just before an HTTP request, can compromise the result. We solved this issue by adding sufficient idle time (2 seconds) between requests.

5. Critique

The thesis holds well: we were able to reproduce the results shown in Figure 6 with a good degree of accuracy. The one aspect of the graph that did not match was the shape; the paper showed larger improvements in the lower bandwidth buckets than our experiment did. This is very likely because, as the paper mentions, lower bandwidth clients tend to have a higher round trip time (RTT) to the server, whereas in our experiment all clients had the same RTT of 70ms. The specific shape of the figure therefore depends on the assumption that lower bandwidth clients have higher RTTs.

We verified with tcpdump that, for the median response size of 9kB, delivery takes only one round of TCP slow start with initcwnd=10 but two rounds with initcwnd=3, so the improvement is on the order of one round trip time. However, we noticed that on low bandwidth links the first 10 segments cannot be delivered within one RTT: the delay is dominated by the low bandwidth itself, resulting in a lower percentage improvement. This explains the trend we observe in the reproduced result.
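The round counts follow directly from slow start’s doubling, and a back-of-the-envelope helper confirms them (the 1430-byte MSS is our assumption, and delayed ACKs are ignored):

    def slowstart_rounds(resp_bytes, initcwnd, mss=1430):
        "Round trips needed to deliver resp_bytes during slow start."
        rounds, sent, cwnd = 0, 0, initcwnd
        while sent < resp_bytes:
            sent += cwnd * mss
            cwnd *= 2  # classic slow start: cwnd doubles each RTT
            rounds += 1
        return rounds

    print slowstart_rounds(9000, 3)   # 2 rounds: 3 segments, then 6
    print slowstart_rounds(9000, 10)  # 1 round: all ~7 segments at once

The bandwidth effect is equally simple arithmetic: at 100Kbps, a 9kB response takes about 9000 * 8 / 100,000 ≈ 0.7 seconds just to serialize, an order of magnitude more than the single 70ms RTT the larger window saves.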

We conclude that the major difference between our result and the original stems from how we controlled the parameters. The original paper measured improvements in a real Internet environment, whereas we could only observe how the initial congestion window and bandwidth interact under simpler assumptions.

6. Platform

We chose to use Mininet as our platform primarily because it is the platform we are most familiar with. We didn’t want to invest time in learning other ways to reproduce the paper’s results when Mininet was capable. In addition, since Mininet is a software package for network emulation, it is easy to use and distribute compared to a hardware setup. We used Amazon EC2 because it is easily accessible and relatively cheap to use. Furthermore, it will help others reproduce our results by standardizing the hardware running our experiment.

We think our entire setup is very reproducible. It takes very few steps, and our own attempts to reproduce the results from a completely blank slate have been successful. The main setup parameter that could affect reproducibility is the type of EC2 instance used, but our instructions specify c3.large, as that is what we tested with.

7. README
  • Create a new EC2 instance (c3.large) with the following community AMI: “ami-e2b6ded2”
    • To view the results of the experiment, remember to add a custom security rule allowing inbound TCP traffic on port 8000
  • Log into the instance and clone our GitHub repo
  • Run the experiment. The experiment will run for 25-30 minutes.
    • sudo ./run.sh
  • The results of the experiment are plotted in a graph saved as “results.png”
    • To view the results, run ‘python -m SimpleHTTPServer’ and point your web browser at <hostname>:8000/results.png
8. References

[1] “An Argument for Increasing TCP’s Initial Congestion Window (with appendix)”: https://developers.google.com/speed/protocols/tcp_initcwnd_techreport.pdf

 


One response to “CS 244 ’14: An Argument for Increasing TCP’s Initial Congestion Window”

  1. 5/5: Your instructions were clear and we were able to exactly reproduce the graph in Figure 2. Your comment that the improvement at each bandwidth depends on the unspecified per-host RTT, I believe, adequately explains the difference between your results and the original paper’s.
