CS244 ’17: Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard

Team: De-An Huang (ID: dahuang), Jiwei Li (ID: jiweil)

Paper: Te-Yuan Huang, Nikhil Handigol, Brandon Heller, Nick McKeown, Ramesh Johari, “Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard”, ACM SIGCOMM Internet Measurement Conference (IMC), Boston, Nov 2012

Introduction and Goals

Video streaming alone accounts for over 50% of peak download traffic in the US. To provide a high-quality user experience, it is important to pick the correct video streaming rate: when it is too high, the viewer experiences annoying rebuffering events; when it is too low, the video quality suffers. The key challenge is that most content is transferred over standard HTTP from CDN servers. This forces rate selection and bandwidth estimation to happen at the client, above the HTTP layer, where it is hard to do accurately.

Popular video streaming services, such as Hulu, Netflix, and Vudu, therefore use conservative rate selection algorithms. This works well when there is no competing flow. However, when a competing flow is present, the conservatism creates a feedback loop that drives the video stream's throughput very low: 1. throughput drops (initially because of the competing flow); 2. the client selects a low video rate because of its conservatism; and 3. the requested video segments shrink, which in turn further lowers the perceived throughput. This is called the downward spiral effect.

The goal of the original paper is to understand this phenomenon and to select a video streaming rate, in the presence of competing traffic, that gives viewers the best viewing experience.


We are particularly interested in this problem because of the huge proportion of download traffic that video streaming accounts for. A better understanding of the mechanism behind the downward spiral effect points out deficiencies of current algorithms and suggests avenues for future improvement.

Result from the Paper

Here, we give an overview of the results presented in the paper. Figure 4(a) of the paper shows the client’s video throughput in the absence and presence of competing TCP flows. In the absence of competing flows, the throughput is close to the bottleneck capacity, with the video rate at 1750 kb/s. Once competing flows are added, the throughput immediately plummets to a very low value (235 kb/s) and stays there until the competing flow stops. This is counter-intuitive, since the two flows should share the capacity, each taking roughly 2.5 Mb/s.


Figure 4(a) in the paper. The authors show that the downward spiral effect is visible in popular services.

To understand why this happens, the authors first confirm that the available bandwidth is indeed there for the streaming video. They run an experiment in which a client is forced to play the video at 1750 kb/s whenever it would pick a lower rate. This forcing strategy does not cause any rebuffering events, which means the downward spiral effect is caused by the client’s rate selection algorithm underestimating the available bandwidth.

The authors further examine each of the clients. When there is no competing flow, the client chooses the highest playback rate. This fills up its playback buffer while the bottleneck link stays fully occupied. The client then enters a periodic ON-OFF sequence: it requests a new 4-second segment whenever the buffer is not full; during the 4-second OFF period, the TCP congestion window times out.
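
This ON-OFF cycle can be sketched as a small scheduling function. The code below is our own illustration, not the paper's client code; the buffer capacity and function name are assumptions for the example.

```python
# Illustrative sketch (not the paper's client code) of the ON-OFF cycle:
# the client requests a new 4-second segment whenever the playback buffer
# has room, then stays idle (OFF) while the buffer drains.
SEGMENT_SECONDS = 4.0      # seconds of video per segment
BUFFER_CAPACITY = 240.0    # playback buffer size in seconds (assumed)

def next_request_delay(buffer_level, download_seconds):
    """Return how long the client waits before its next segment request.

    While the buffer has room, the client requests back-to-back (ON period).
    Once the buffer is full, it must wait for one segment's worth of video
    to play out (OFF period), during which the TCP connection goes idle.
    """
    if buffer_level + SEGMENT_SECONDS <= BUFFER_CAPACITY:
        return 0.0                       # ON: request immediately
    # OFF: wait until one segment of playback frees up buffer space,
    # minus the time the download itself already took.
    return max(0.0, SEGMENT_SECONDS - download_seconds)

# With a full buffer and a 0.5 s download, the client idles ~3.5 s --
# long enough for the congestion window to time out.
print(next_request_delay(240.0, 0.5))   # → 3.5
print(next_request_delay(100.0, 0.5))   # → 0.0
```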

When a competing flow arrives, it fills the bottleneck buffer during the OFF periods, and the video flow sees very high packet loss. Each segment download finishes before cwnd can climb back up, and the system re-enters the OFF period. This repeats every ON-OFF cycle, keeping the throughput low. Because the requested segments are small, the throughput perceived by the client falls well short of its fair share of the link. The rate selection algorithms are conservative, so the client selects a lower video rate, which means even smaller requested segments, which further worsens the perceived throughput. This feedback loop continues.
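
A minimal sketch of the conservative rate-selection step driving this loop, as we understand the paper's custom client: discount the bandwidth estimate by a conservatism factor, then pick the highest available rate below the discounted value. The rate ladder and function names here are our own illustration.

```python
# Hypothetical rate ladder (kb/s) for illustration.
AVAILABLE_RATES = [235, 375, 560, 750, 1050, 1400, 1750]

def select_rate(estimated_bw_kbps, conservatism=0.40):
    """Pick the highest rate not exceeding (1 - conservatism) * estimate."""
    target = estimated_bw_kbps * (1.0 - conservatism)
    candidates = [r for r in AVAILABLE_RATES if r <= target]
    return candidates[-1] if candidates else AVAILABLE_RATES[0]

# The feedback loop: a low throughput sample leads to a low rate, hence a
# smaller segment, hence an even lower perceived throughput next round.
print(select_rate(2500))   # → 1400  (target 1500 kb/s)
print(select_rate(800))    # → 375   (target 480 kb/s)
```

Lowering the conservatism parameter (e.g. to 0.10, as in Figure 21 of the paper) raises the target and lets the client hold a higher rate for the same bandwidth estimate.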

Subset Goals to Reproduce

Our goal is to reproduce the custom client results (Figures 20 to 23) in the paper. This includes the following subgoals:

  1. Reproduce the downward spiral effect with the conservative custom client. (Figure 20)
  2. Show that conservatism is really the cause by implementing a less conservative client and showing that it indeed improves. (Figure 21)
  3. Show that an improved filter for bandwidth estimation can lead to better video streaming throughput. (Figure 22)
  4. Study the effect of varying segment size. (Figure 23)

Figure 20: Custom client that reproduces the downward spiral effect


Figure 21: Custom client with less conservatism (10%); the downward spiral effect is not as severe


Figure 22: Custom client with an improved filter, which further alleviates the downward spiral effect


Figure 23: Custom client with 5x segment size

Subset Motivation

We chose to reproduce the results of the custom client because all of the algorithm’s parameters are available in the paper. This is unlike observing services A, B, and C, whose rate selection algorithms are not available. It is important that we control the rate selection algorithm so that we can verify the effect by changing its parameters. In addition, this allows us to experiment with different parameters for improvement.

Subset Results

As shown in Figure 20-ours, we are able to reproduce the downward spiral effect of the custom client from Figure 20 in the original paper. When a competing flow is added (at ~200 sec, once the buffer is full), the video rate and the throughput immediately plummet to a low value (750 kb/s) and stay there until the end.


Figure 20-ours: Video rate and throughput when a competing flow is present. We successfully reproduce the downward spiral effect. Conservatism is set to 40%; the filtering strategy is an averaging filter.

We are also able to reproduce the result of Figure 21 in Section 6.3. With a less conservative algorithm, conservatism set to 10% (Figure 21-ours), the video rate is higher (remaining around 1400 kb/s) than with conservatism set to 40% (Figure 20-ours). This is true even though the playback buffer stays full.


Figure 21-ours: Our custom client with 10% conservatism. We reproduce the observation in Sec. 6.3 of the paper that less conservatism can improve the video rate.

In Figure 22-ours, we reproduce the result of Figure 22 in the original paper by using a better filtering mechanism: the 80th percentile of the measured rates of the past ten segment downloads. Compared to Figure 21-ours, where we used an averaging filter, the variation is greatly reduced and the majority of the movie plays at the highest available rate (1750 kb/s). Only for a small fraction of the time does the video rate drop to the second-best rate (1400 kb/s).
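
The two filters we compare can be sketched as follows. This is our own implementation for illustration (the paper does not give code); the nearest-rank percentile computation is an assumption.

```python
# Bandwidth filters over the last ten throughput samples (kb/s).
def average_filter(samples):
    """Moving average over the last ten samples."""
    window = samples[-10:]
    return sum(window) / len(window)

def percentile80_filter(samples):
    """80th percentile (nearest-rank style) of the last ten samples.

    Unlike the average, a high percentile discards transient low-throughput
    samples caused by the ON-OFF cycle, so one bad download barely moves
    the estimate.
    """
    window = sorted(samples[-10:])
    idx = min(len(window) - 1, int(0.8 * len(window)))
    return window[idx]

# Nine good samples and one stall: the average dips, the percentile holds.
samples = [1750] * 9 + [235]
print(average_filter(samples))       # → 1598.5
print(percentile80_filter(samples))  # → 1750
```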


Figure 22-ours: Our custom client with 10% conservatism and an 80th-percentile filter. This further improves the video rate over just having a less conservative client.

Finally, we reproduce the result of Figure 23 in the original paper by using a larger segment size (5x) on top of the custom client of Figure 22-ours. As shown in Figure 23-ours, the video rate stays at the highest value (1750 kb/s) throughout our experiment, and the video throughput is smoother because of the larger segment size. This is consistent with the observation in the original paper. Note that the buffer also drops to a lower level in this case.


Figure 23-ours: We further verify the effect of increasing the segment size to 5x and show that it leads to an improved video rate.


In this assignment, we decided to use Mininet to reproduce the results of the custom client. This removes the challenge of interfacing with existing video streaming services such as Netflix, Hulu, Vudu, and Youtube. The remaining challenges were simulating the behavior of the video streaming client and the competing flow in Mininet. Simulating the competing flow as in the paper is challenging because the authors use an open-ended byte-range request to a large video file; since the experiments last more than 1000 seconds, we would need a very large file for this purpose. We instead followed the Mininet tutorial and used iperf as the competing flow. For the client, we followed our setup in assignment 1, but with RangeHTTPServer to handle byte-range requests. Overall, the paper is very clear on the parameters and settings, so we were able to reproduce most of the results.
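
The core measurement our simulated client makes each cycle is per-segment throughput: bytes downloaded divided by wall-clock download time. The sketch below illustrates this; the function name and numbers are illustrative, not taken from our scripts.

```python
def perceived_throughput_kbps(segment_bytes, download_seconds):
    """Per-segment throughput as the client perceives it, in kb/s."""
    return segment_bytes * 8 / 1000 / download_seconds

# A 4-second segment at 1750 kb/s is 875,000 bytes. If a competing flow
# stretches the download from 4 s to 7 s, the client perceives only
# 1000 kb/s -- well below its fair share of a 5 Mb/s bottleneck.
segment_bytes = 1750 * 1000 // 8 * 4   # 875,000 bytes
print(perceived_throughput_kbps(segment_bytes, 4.0))  # → 1750.0
print(perceived_throughput_kbps(segment_bytes, 7.0))  # → 1000.0
```

Feeding these samples through the rate filter and conservative selection step closes the loop that either recovers (good filter, low conservatism) or spirals downward (averaging filter, high conservatism).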


We think the main thesis of the paper holds up well in our simulated reproduction. This is supported by the progression from Figure 20-ours to Figure 23-ours, where the video rate gradually improves as we address the causes of the downward spiral effect proposed in the paper. Our main critique would be to go further in the direction of a more aggressive video streaming client. As shown in our figures and those of the original paper, the playback buffer stays almost full most of the time (almost 4 minutes of content), meaning rebuffering is far from imminent; this suggests room for a more aggressive client. However, we also agree that this may be an artifact of the simulated environment: in the real world there can be many more competing flows, and the bottleneck link can have varying throughput, especially in mobile/wireless settings.


Figures 20~23 are produced by incrementally modifying 1. the conservatism, 2. the rate filter, and 3. the segment size. This makes it hard to compare the contribution of each improvement, although we do observe that adding all three leads to the best result. We extend the results in the paper by examining the effect of the rate filter and the segment size independently of the other modifications.

In Figure E1, we show the result of only changing the filter from a moving average to the 80th percentile. We see improvements over Figure 20-ours, but this is not as effective as changing the conservatism, as shown in Figure 21-ours.


Figure E1: We independently examine the effect of each modification. This figure shows the effect of only changing the rate filter. This is better than Figure 20-ours but not as good as Figure 21-ours.

In Figure E2, we show the result of only increasing the segment size to 5x. Out of the three modifications, this gives the best result.


Figure E2: The result of only increasing the segment size to 5x. This gives the best result out of the three modifications mentioned in the paper.


As mentioned in the challenges, we use Mininet to simulate the results in the paper. Our topology contains three hosts: the server, the video streaming client, and the competing client. We believe the setup is quite reproducible with Mininet on similar hardware (our figures were generated using the VirtualBox Mininet VM from assignment 1). The system has three input parameters: the conservatism rate, the filtering mechanism, and the segment size. All of them have a significant impact on the video rate; the most significant is the segment size, as shown in the extension experiments.


Our figures are generated using VirtualBox Mininet in PA1 ( http://web.stanford.edu/class/cs244/vbsetup.html ). In the VM, do:

  1. Clone git repository: “git clone https://daahuang@bitbucket.org/daahuang/cs244-pa3.git”
  2. Install RangeHTTPServer: “sudo pip install rangehttpserver”
  3. Generate all figures (Figure 20~23): run “sudo ./run_all.sh”. The results will be copied to results/fig-2x.png. Our results are results/fig-2x-ori.png. Each figure takes around 1000 sec.

* In case of errors, run “./clean.sh” to reset Mininet and kill processes

Alternatively, we provide instructions on running our code on google cloud instance:

  1. Setup google cloud instance with the following options
    1. Go to “Compute Engine” => “VM instances”
    2. Zone: us-west1-b
    3. Machine Type: 2 vCPUs, which has default 7.5 GB memory
    4. Boot disk: Click “Change” and change to “Ubuntu 14.04 LTS” with default 10GB disk
    5. Firewall: Check “Allow HTTP traffic” and “Allow HTTPS traffic”
  2. Connect to google cloud instance with option “–zone=us-west1-b”
  3. Run “sudo apt-get update” and then “sudo apt-get install git” to get git
  4. Clone git repository: “git clone https://daahuang@bitbucket.org/daahuang/cs244-pa3.git”
  5. Run setup code: “sudo ./setup.sh”
  6. Generate all figures (Figure 20~23): run “sudo ./run_all.sh”. The results will be copied to results/fig-2x.png. Our results are results/fig-2x-ori.png. Each figure takes around 1000 sec.

One response to “CS244 ’17: Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard”

  1. 5/5. The results were easy and straightforward to reproduce. Took just over an hour to spin up the PA1 VM and reproduce the plots. Each plot was reproduced with high precision; only minor distinguishable differences were apparent on figure 23, likely caused by a difference in network conditions during our trials and the authors’. We also attempted to reproduce these results over Google Compute Engine. It took just over 1.5 hours to set up a fresh Google Cloud Instance in accordance with the provided guidance and gather the results. We should note that the script may have encountered a problem when running the trial for two of the figures (20 & 22); the resulting plots showed a video throughput that nearly matched the buffer status instead of oscillating well below. We imagine this had more to do with the environment we were using and not the script itself, as the other two figures were reproduced with high precision. In this case it was great of the authors to provide multiple means for reproduction.
