CS244 ’17: PCC, fairness and flow completion time

By Petar Penkov <ppenkov (at) stanford> and Hristo Stoyanov <stoyanov (at) stanford>


PCC: Re-architecting Congestion Control for Consistent High Performance by Dong et al. [1] is a recently published paper that proposes a novel approach to TCP congestion control. The authors claim that existing TCP congestion control algorithms are too rigid for the diversity of conditions under which TCP must operate. According to Dong et al., this inflexibility arises from the design of the TCP family – each algorithm represents a “hardwired mapping” between signals (e.g. loss) and actions (e.g. halving the congestion window). PCC aims to provide consistently high performance across a wide variety of environments, from shallow-buffered networks, to satellite links, to data centers with many senders transmitting to a single receiver. Further, PCC successfully maintains fairness between competing PCC flows.

PCC (Performance-oriented Congestion Control) achieves this by continuously conducting small experiments on the network and choosing the action that maximizes a given utility function based on the observed results. Furthermore, PCC recognizes that congestion is only one of the possible explanations for a packet loss and therefore does not use loss as the sole indicator of congestion. Instead, the utility function additionally takes throughput and latency into consideration. PCC is easy to deploy, as it requires no hardware support, and is very flexible, as the utility function can be configured to express different objectives.
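To make the idea concrete, the paper’s default loss-based utility rewards delivered throughput and sharply penalizes loss above a small threshold. Below is a minimal Python sketch in that spirit; the function name is ours, and the exact constants (the 5% threshold and the sigmoid steepness `alpha`) are illustrative rather than the paper’s tuned values:

```python
import math

def pcc_utility(rate_mbps, loss_rate, alpha=100.0):
    """Sketch of a PCC-style utility function.

    Rewards delivered throughput, but a steep sigmoid collapses the
    reward once the loss rate exceeds a ~5% threshold, and an extra
    term penalizes the absolute volume of lost traffic.
    """
    delivered = rate_mbps * (1.0 - loss_rate)  # goodput after losses
    sigmoid = 1.0 / (1.0 + math.exp(alpha * (loss_rate - 0.05)))
    return delivered * sigmoid - rate_mbps * loss_rate
```

With this shape, sending at the same rate with no loss scores far higher than sending with 10% loss, which is what drives PCC’s rate-control decisions.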


We were drawn to the paper because it explores congestion control, an old problem that still does not have a perfect solution. Part of what makes the problem difficult is that, as seen in the paper, there is no shortage of scenarios to account for. What we liked about the paper is that the authors tackle the problem by identifying a place where the widespread solution (TCP) falls short, namely the assumption that loss implies congestion, and improving on it.


The authors found that PCC outperforms specially engineered TCP variants in multiple scenarios on a variety of metrics, which demonstrates the merit of PCC and the shortcomings of TCP in those scenarios. PCC converges faster and more closely to flow fairness, as seen in figures 6, 12, and 13, and it utilizes buffers better, as seen in figures 4 and 7. The latter result implies that it is possible to build routers with very limited memory. Furthermore, figure 15 shows that these results do not come at the expense of short flow completion time: even though PCC is slower than CUBIC, it is only 20% slower in the worst case.

Reproduction goals and Motivation

We were interested in the delay characteristics of PCC. There were two main reasons behind our particular choice of goals. First, these graphs were not the primary goals of the two teams who have worked on PCC in the past, which means our results could supplement theirs to more fully cover the paper. Second, we worried PCC might be too aggressive and maintain high RTT, especially with multiple competing flows. In line with Dukkipati and McKeown [2], we believe flow completion time is an important metric for user experience and should therefore be the main evaluation metric of PCC. The experimental setups of Figure 6 and Figure 15 (described below) allowed us to easily observe the throughput-delay tradeoff.

We aimed to reproduce figures 6 and 15 in the paper [1]. In our attempt to replicate the results of the paper, we wanted to run our experiments multiple times to create a distribution and observe the span of relative throughput. Furthermore, we wanted to extend figure 15 and observe the behavior of PCC under loads beyond 70%, since the goal of TCP (and PCC) is to utilize the bottleneck link at up to 100%.

Subset Results


We chose to reproduce our results in Google Cloud on Mininet for several reasons. First, Google Cloud images are highly configurable, which allows for running the experiment on a variety of machines. Furthermore, the gcloud command line tool enables easy automation and distribution of our experiment over multiple images. Our particular choice of Mininet was dictated by two factors. First, Mininet was both familiar and easy to set up for our particular needs. Second, the original experiment was run in Emulab, and running it on Mininet would show whether the results are environment-dependent. As described in the Challenges section, choosing a machine with adequate CPU resources is crucial for reproducing the experiment.

For the RTT fairness experiment, we set up a Mininet topology comprised of a switch that connects three hosts, one running as a server and two as clients. The two clients share the 100Mbps link to the server, and the differing RTTs are set on their links to the switch. The short-RTT flow always has 10ms RTT, while the other flow’s RTT ranges from 20ms to 100ms. For the PCC experiment we run the provided sender and receiver programs and use their throughput output for our measurements. For the TCP experiments we use iperf, a conventional tool for measuring bandwidth. The queue size was set to one BDP of the short-RTT flow.
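Sizing the queue to one BDP is a small calculation; the sketch below shows it, assuming 1500-byte full-size packets (the helper name is ours):

```python
def bdp_packets(link_mbps, rtt_ms, pkt_bytes=1500):
    """Bandwidth-delay product of a link, in full-size packets.

    1 Mbps x 1 ms = 1000 bits, so the BDP in bits is simply
    link_mbps * rtt_ms * 1000; divide by the packet size in bits.
    """
    bdp_bits = link_mbps * rtt_ms * 1000
    return int(bdp_bits // (pkt_bytes * 8))
```

For the short-RTT flow here (100Mbps, 10ms) this gives a queue of roughly 83 packets; for the 15Mbps, 60ms link used in the flow completion time experiment below, roughly 75 packets.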

For the flow completion time experiment, we set up a simple Mininet topology with a client and a server on a 15Mbps, 60ms RTT link. We augmented PCC to report the duration of each flow, and we again use the provided sender and receiver programs for our experiment. To evaluate the performance of TCP we set up a simple server and client in Go that report their flow completion time. This choice was made so that the TCP experiment used a code base completely independent from the code that Dong et al. provide. The queue size of the link is again set to one BDP. Additionally, we set the initial TCP congestion window to 1, as specified in the PCC paper.
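Our actual measurement code is the Go client/server mentioned above; the Python sketch below illustrates the same idea of timing a flow from connection setup until the receiver has drained every byte (the function name, port handling, and buffer sizes are ours, not from the experiment):

```python
import socket
import threading
import time

def measure_fct(nbytes=1_000_000):
    """Time one local TCP transfer of nbytes: connect, send, and wait
    until the receiving side has read everything. Returns seconds."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        got = 0
        while got < nbytes:
            chunk = conn.recv(65536)
            if not chunk:
                break
            got += len(chunk)
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    start = time.time()  # FCT includes connection establishment
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(b"x" * nbytes)
    cli.close()
    t.join()  # flow is complete once the receiver has drained it
    srv.close()
    return time.time() - start
```

The subtlety this sketch highlights is the stopping condition: the flow is only "complete" when the receiver has consumed the last byte, not when the sender's buffer empties, which is one reason consistent measurement across TCP and PCC was non-trivial.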

RTT Fairness

We conducted the experiment three times in order to compute an interval for the relative throughput, and we plotted the minimum, median, and maximum values. Our results revealed that TCP Reno and PCC have very consistent relative throughputs, and these are very similar to the ones in Figure 6 in the paper. TCP CUBIC yielded a wide range of relative throughput values, and the result in the paper falls within that range. Since PCC yields relative throughput consistently closest to 1, this verifies that the algorithm outperforms CUBIC and Reno in terms of fairness between flows of different RTTs. The figure below shows the result of our experiment, including the computed range.

[Figure: relative throughput of the two competing flows vs. RTT of the long-RTT flow, showing min/median/max over three runs for PCC, CUBIC, and Reno]

Flow Completion Time

Our results in the flow completion time experiment differ slightly from the results in figure 15 in the original paper. The authors have both the PCC and TCP lines starting around 500ms, while we found it hard to get both to start at that time, as seen in the figure below. We believe this is due to differences in the way we measure flow completion times for the two protocols. Apart from this difference, our results confirm the overall finding of the paper that PCC does not fundamentally harm short flow performance. There is no large variability in flow completion time for network loads under 40%, and for heavier loads PCC and TCP show a gap comparable to the one in the paper.

We run 500 flows, with the time between flow arrivals drawn at random from an exponential distribution. We vary the mean of the exponential distribution to achieve a given network load. We tried means of 1/λ seconds for values of λ between 1 and 20. We measure network load at the link level, using bwm-ng to measure the load during the experiment.
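The arrival process and the resulting offered load can be sketched as follows. Here `mean_flow_kb` is a stand-in for the mean of the experiment’s flow-size distribution, and the helper names are ours:

```python
import random

def interarrival_times(n_flows, lam, seed=0):
    """Draw n_flows inter-arrival gaps from Exp(lam).

    The mean gap is 1/lam seconds, so larger lam means flows arrive
    more often and the offered load rises. Seeded for repeatability.
    """
    rng = random.Random(seed)
    return [rng.expovariate(lam) for _ in range(n_flows)]

def offered_load(mean_flow_kb, lam, link_mbps):
    """Fraction of link capacity offered on average: arrival rate
    (lam flows/s) times mean flow size in bits, over capacity."""
    return (lam * mean_flow_kb * 8e3) / (link_mbps * 1e6)
```

Note that `offered_load` only gives the expected load; the load actually achieved in any one run varies with the sampled arrivals and sizes, which is why we measured utilization with bwm-ng rather than trusting the parametrization.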



Challenges

We faced several challenges related to the flow completion time experiment. Measuring flow completion time, and doing so consistently between TCP and PCC, were two non-trivial problems due to the inherently different ways the flows are started. We found that PCC is CPU-bound due to busy waiting, and therefore a large number of flows can skew results on a machine with few CPUs. Our most consistent results were achieved on a machine with 4 CPUs.

Further, it was not clear from the paper that the experiment runs with an initial TCP congestion window of 1, but this could be inferred from the RTT statistics reported in figure 15. Running the experiment at a fixed network utilization was difficult, as a single parametrization of the exponentially distributed time between flows could yield a range of network utilization values. For this reason we conducted the experiment with different parametrizations, measured the average network utilization at each data point, and reported our results at these points.

Setting up Mininet was surprisingly challenging and forced us to reproduce on the recommended Mininet image, which luckily runs Ubuntu 14.04, the version used by the authors of the paper.


Conclusion

We attempted to reproduce Figures 6 and 15 from the PCC paper. The first figure describes the fairness of PCC with regard to other flows, while the second describes the flow completion times of PCC flows. PCC achieves fairness faster than TCP, while maintaining very similar flow completion times. Our experiments confirm both of these results. While making PCC less CPU-intensive by disabling busy waiting is an area for future work, the protocol yields promising results and has important implications for congestion detection.


Reproduction Instructions

Our code is available at: https://github.com/petarpenkov/cs244_pcc.

We have found that the easiest way to set up a virtual machine is to install the gcloud kit. Once it is set up, you can run the following commands to create a project and a suitable instance:

gcloud init ;

gcloud compute images create mininet-image --source-uri=https://storage.googleapis.com/mininet/disk.tar.gz ;

gcloud compute instances create pcc --image=mininet-image --custom-cpu=4 --custom-memory=3840MiB ;

The first command initializes the gcloud kit and prompts you to create a project. The second creates a Mininet image from our disk.tar.gz file, while the third creates an instance from mininet-image. Alternatively, one can use the web UI to create a VM image from the URI at https://storage.googleapis.com/mininet/disk.tar.gz and then create an instance from that image. We recommend at least 4 CPUs and the associated memory quantity.

Further, once the instance exists, please run the following command, which chains three steps:

wget https://raw.githubusercontent.com/petarpenkov/cs244_pcc/master/run-remotely.sh && chmod +x run-remotely.sh && ./run-remotely.sh

The above will run several commands that require ssh and gcloud.

The default password required for ssh login is ‘mininet’ (no quotes). We recommend that the team reproducing these results change it. To log into the VM, run:

export VM_IP=`gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list -r pcc` ;

ssh mininet@${VM_IP} ;

And once into the VM, execute

screen -dr

to see the progress report while the experiment runs. See man screen for details on how to use it.

The results will be ready in about 8 hours. Once the results are done, you can copy the plots to your machine by executing

scp mininet@${VM_IP}:'/home/mininet/cs244_pcc/plots/*' .


Once the experiments are done, there will be a file named ‘done’ in the home folder of the VM instance. This is created by run-experiment.sh at the end.

We can provide an IP address to a machine that is already set up with all dependencies and scripts. We have not provided that here for security considerations. Please contact us at Hristo Stoyanov <stoyanov (at) stanford> or Petar Penkov <ppenkov (at) stanford>.


[1] M. Dong, Q. Li, D. Zarchy, P. B. Godfrey, and M. Schapira. PCC: Re-architecting Congestion Control for Consistent High Performance. USENIX NSDI, 2015.

[2] N. Dukkipati and N. McKeown. Why Flow-Completion Time is the Right Metric for Congestion Control. ACM SIGCOMM Computer Communication Review, 2006.



One response to “CS244 ’17: PCC, fairness and flow completion time”

  1. Peer Reproduction: 5/5. Your instructions were easy to follow and the scripts ran to completion with no issues. We ran the experiments a few times and our figures were quite similar to the ones you created. We did see high variability in the CUBIC throughput, but this was mentioned in the blog post. It would have been nice to see an explanation for this variability.

    We did notice some discrepancies with your reproduced Figure 15 and the original figure for high network loads. The gap between CUBIC and PCC’s 95th percentile performance looks about twice as large, and we didn’t see the PCC and TCP flows for each metric (average, median, 95th) grouped together as closely in the reproduction. The post also mentions extending network load to 100%, while Figure 15 shows flows extended to between 70% and 80%.

    Overall, nice work!
