CS244 ’17: Compiling Path Queries


Dominique Piens and Omar Rizwan

Goals

In “Compiling Path Queries,” Narayana et al. introduce a new declarative query language for interactive whole-network debugging, likening its role to that of SQL for querying databases. In their implementation, queries are compiled to deterministic finite automata and then to match-action table rules, which let switches efficiently track query state carried inside each packet. For example, network operators could track packets along a path as they leave the source and as they enter the destination.
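To make the compilation pipeline concrete, here is a minimal, self-contained Python sketch of the idea (ours, not the authors’ implementation): treat a path query as a regular expression over packet locations, compile it to a DFA whose current state is carried in each packet, and realize the DFA as rules keyed on (state, location).

    # Hypothetical query: packets that enter at switch 1 and later reach
    # switch 3, i.e. the location regex "1 .* 3" over switch IDs.

    # DFA states: 0 = start, 1 = seen switch 1, 2 = accept (saw 1, then 3).
    def transition(state, switch_id):
        if state == 0:
            return 1 if switch_id == 1 else 0
        if state == 1:
            return 2 if switch_id == 3 else 1
        return 2  # the accepting state is absorbing

    # "Match-action rules": one rule per (state, switch) pair that rewrites
    # the DFA-state tag carried in the packet; no per-packet trips to the
    # controller are needed.
    rules = {(s, sw): transition(s, sw) for s in (0, 1, 2) for sw in (1, 2, 3)}

    # Simulate a packet traversing the path 1 -> 2 -> 3.
    state = 0
    for hop in [1, 2, 3]:
        state = rules[(state, hop)]
    print("query matched:", state == 2)  # True: the packet went 1 ... 3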

The authors emphasize that compilation must be fast and that the emitted rules must be small enough to fit on switches. They claim that because path queries meet this standard, you can debug your network with something like a REPL; network administrators don’t need to carefully plan and batch their measurements in advance.

Motivation

No existing system provides a declarative whole-network query language that is fast, accurate, and runs on standard SDN hardware. Some query systems rely on lower-resolution spatial or temporal sampling, which captures only a subset of packets. Others can analyze data flowing through a single point, but cannot keep track of a particular packet as it moves through the network. These restrictions lead to slower, less expressive, or less accurate SDN debugging. Datacenters’ profit margins are impacted by reliability, so any tool that helps network administrators better understand or fix their networks provides a competitive business advantage.

The authors develop a concrete example in section 4 of the paper: a network operator might want to diagnose whether a tenant’s performance problem is due to an end-host or the network. Interactive debugging allows the operator to isolate the cause by experimentation and elimination. For example, the operator can write queries to compare expected and actual packet delivery counts between two points, as in the sketch below. The query language makes it much easier to test hypotheses and find the source of the problem.
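To illustrate the elimination step (with hypothetical counter values, not real data): if the operator taps the route at each hop with a chain of path queries, the lossy link is the first hop where the match count drops.

    # Toy sketch (ours, not the paper's code): localize loss from per-hop
    # match counts reported by path queries along the route.
    hop_counts = {"s1": 10000, "s2": 10000, "s3": 9412, "s4": 9412}  # hypothetical
    path = ["s1", "s2", "s3", "s4"]
    for upstream, downstream in zip(path, path[1:]):
        if hop_counts[downstream] < hop_counts[upstream]:
            print("loss localized to link %s -> %s" % (upstream, downstream))
            break
    else:
        print("no loss inside the network; suspect the end hosts")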

Results

The query language and compilation algorithms are extensions of existing software, namely:

  • Pyretic, a Python SDN programming toolkit and OpenFlow controller
  • The fast NetKAT compiler in Frenetic: NetKAT is an SDN policy language that can be quickly compiled to match-action tables.

Narayana et al. also use Mininet to test the emitted path queries on simulated networks. For the paper, they ran compilation and simulation on a Xeon E3 server with eight 3.7 GHz CPU cores and 32 GB of RAM.

To achieve interactivity, the authors developed six incremental optimizations, which cut compile times by three orders of magnitude. With compilation taking on the order of seconds on small networks and tens of seconds on large ones, the authors argue that the system is fast enough for interactive use.

Additionally, the authors study how compile times and match-action rule counts change as the number of network nodes increases. They show that for up to 160 nodes, compile times are ‘reasonable’ (tens of seconds), and the number of rules fits within a typical switch’s TCAM memory.

Subset Goal

We aimed to reproduce Figure 9, which shows how the compile times and rule counts of path queries scale with the number of nodes on synthetic ISP topologies. Each data point is an average over six queries compiled on five network topologies at each fixed node count.


The original Figure 9 from the paper.

Subset Motivation

Figure 9 is the proof that path queries are viable on near-arbitrary network topologies and across a range of network sizes. The trends suggest a near-linear range from 20 to 120 nodes where the system is interactive and deployable. Network operators considering path queries would likely use Figure 9 to evaluate feasibility in their own networks. In other words, path queries’ scalability largely determines the system’s value. A replicated Figure 9 with similar trends would demonstrate that the system is replicable (machine- and software-environment-independent) and practical.

Subset Results

Up to 120 nodes, the replicated Figure 9 looks very similar to the original. Compile times are slightly higher (within a factor of two), but the trend lines are similar; the difference could be attributed to our use of standard Python instead of PyPy, or to the difference in hardware. Rule counts look identical, and though not shown here, the number of DFA packet bits also matches the paper’s. From our replication experiment, it appears the path query system is indeed replicable.


Replicated Figure 9, up to 120 nodes

Challenges

Understanding the system and components

The path query language sits atop a complex system of components from the Frenetic and Pyretic groups, plus some external libraries. The paper describes this in a short, reference-filled paragraph, and without prior background it was hard to understand the function and relevance of each component. In particular, we found the relationship between Frenetic and Pyretic confusing: there is significant overlap between the two projects as a whole (controllers, compilers, languages), but the path query system uses specific pieces of each, and those pieces don’t overlap much.


Simplified diagram of the system.

This was especially difficult without a known-good environment in which to run path queries; many times, we didn’t know if we’d missed some component, or if the VM just did not work out of the box (both were true at times).

Dependencies

The corresponding author has a page about replication, which links to a particular VM, but the link is broken, and the author indicated that retrieving it would not be easy. We explored getting path queries running on the Frenetic example VM (which would have the NetKAT compiler built in) and the Pyretic 0.2.2 example VM (which would have the Pyretic controller and library). Either way, we’d need to install some additional dependencies. In addition, the whole Pyretic project was abandoned last year, so newer versions of other parts of the system often break it.

We ultimately used the Pyretic VM, so we will focus on dependency challenges there. Two main components of the system were missing: Frenetic and Ragel. Installing Frenetic for path query compilation required a careful choice of version for compatibility.

The authors used the PyPy JIT to run their experiments, but we were unable to get it working, so we ran the experiments under standard Python instead. We tried the master branch of the Pyretic GitHub repository rather than the path_queries branch, because master seemed to work once dependencies were installed, while the author had forgotten to check some files into the path_queries branch.

While running on master, however, we noticed that two-fifths of the experiments were failing in the core Pyretic library. The lead author explained that the error was due to packet headers being too large to fit within the limit; we then realized that the path_queries branch did not even contain the function generating the errors. After switching to the path_queries branch and adding the missing files from master, all the experiments ran to completion.

Getting the proactive Pyretic controller working

At the moment, we can run path queries in Mininet simulation, but only in interpreted mode, where all switching decisions go through the Pyretic runtime. A better test of the generated code would be proactive mode, where Pyretic acts as an OpenFlow controller and pushes the compiled match-action rules to the switches.

In proactive mode, we are currently able to compile and send path-query rules to switches, and they look reasonable (correct rules, correct number of rules). But no packets get delivered: either some other rule overrides the rules we add, or there is a format incompatibility. The lead author helped troubleshoot this issue; we suspect lingering problems with our Frenetic version (3.4.1, from 2014). We have not been able to get the controller working fully; we tried other Frenetic revisions in git (from dates the author suggested), but with those we ran into incompatibilities with the OCaml ecosystem or with our Pyretic version.

Long run times

The author indicated that the test suites can take several days to complete. Tests run one at a time, so we could not parallelize them on a single machine; even though we got an eight-core machine from Google Cloud, we only really used one of the cores. It was difficult to iterate on our setup and make sure things were working, because the tests took so long to run. Although we could have spent days running tests, and we’re fairly confident that we would replicate their results for higher node counts as well, we wanted results that other people could replicate easily, so we chose a subset (networks up to size 100) of the results in the paper to present.

Impact of hardware

When we ran the paper’s experiments multiple times, persistent memory errors led to missing data points. Because the missing points would systematically bias our statistics, we had to be careful not to include results with any test failures in this report. We tried increasing the memory of the Google Cloud instance to 6.5 GB and changing the kernel’s memory-overcommit policies; this gave a run time of about 52 hours to replicate Figure 9, but memory errors still appeared starting at a synthetic network size of 100 nodes. We were finally able to replicate the Figure 9 experiments without errors by using a Google Cloud instance with 52 GB of memory.

Thesis

The thesis — that a language like this may be useful for interactive debugging — still seems reasonable; it hasn’t been conclusively proven or disproven. Many of the issues we encountered (and mostly solved) were logistical issues, where we could not find the right software versions to get the original system to work. We did notice, when running the tests in different ways and analyzing their output, that the nature of the query greatly impacts how practical it is to compile and run that query. In practice, what kind of queries would people want to write? Would they be slow queries or fast queries? Also, is waiting half a minute or more for a query really interactive enough? That seems like a human factors question.

Extensions

Error bars

For the original paper, the authors reran some tests up to 5 times, even though they only present average numbers in their results. Because the tests ran slowly for us, we wanted to know whether the reruns were necessary, so we made box plots to show the variance between runs, and we put error bars on the overall plot of scalability trends. We found that the path_loss test was an outlier: it had high variance in compilation time between runs.
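For reference, here is a minimal sketch of how such plots can be produced; the CSV file and column names are assumptions for illustration, not our actual analysis scripts.

    import matplotlib.pyplot as plt
    import pandas as pd

    # Assumed columns: query, num_nodes, run, compile_time_s
    df = pd.read_csv("results.csv")

    # Box plot of per-run variance for one query.
    df[df["query"] == "path_loss"].boxplot(column="compile_time_s", by="num_nodes")

    # Trend lines with error bars (mean +/- standard deviation across runs).
    agg = df.groupby(["query", "num_nodes"])["compile_time_s"].agg(["mean", "std"])
    plt.figure()
    for query, sub in agg.groupby(level="query"):
        sub = sub.droplevel("query")
        plt.errorbar(sub.index, sub["mean"], yerr=sub["std"], label=query)
    plt.xlabel("nodes")
    plt.ylabel("compile time (s)")
    plt.legend()
    plt.show()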

Effect of memory

Running into memory errors and increasing memory in steps revealed the interaction between the machine’s memory, the path query compiler, and the compilation workload. We first tried to reproduce Figure 9 with 6.5 GB of memory and one CPU core. Memory errors occurred for node counts above 100 and for two of the tested queries: path_loss (localize packet loss on multiple multi-hop paths between hosts) and congested_link (determine which flows use congested links). For node counts above 100, average compile times were not significantly affected, but rule counts were lower by a factor of 2 to 10, decreasing further as queries dropped out. The effect of query type on compilation is explored further below. Overall, it appears a machine with little memory can compile almost all queries up to about 100 nodes, and the simpler queries at all node counts. Having one core instead of eight appears to have little impact on average compile times up to 100 nodes.


Figure 9 replicated with one core and 6.5 GB of memory

Performance by query

Plotting compilation times and rule counts as a function of the number of nodes for each query gives a more detailed view of how the complexity of different path queries scales with the network. As the memory errors above suggest, congested_link and path_loss queries compile significantly more slowly as the number of nodes grows. Above 40 nodes, path_loss rule counts appear linear, with a slope of about 25 rules per node and large variance. Plotting box plots of the compilation times and rule counts for path_loss, we note that the variability grows monotonically with node count. Compilation times increase with network size linearly for some queries (firewall, slice) and at least quadratically for others (path_loss, ddos). The congested_link query’s compile time rises to a maximum at 60 nodes, then decreases.
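As a quick sanity check on that slope estimate, here is a simple line fit over hypothetical rule counts (illustrative numbers only, not our measured data):

    import numpy as np

    nodes = np.array([40, 60, 80, 100, 120])
    rules = np.array([1000, 1500, 2000, 2500, 3000])  # hypothetical counts
    slope, intercept = np.polyfit(nodes, rules, 1)    # degree-1 least-squares fit
    print("~%.0f rules per node" % slope)             # ~25 with these numbers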


Number of ingress rules as a function of the number of nodes, plotted by query


Compilation times as a function of the number of nodes, plotted by query

Statistics for path_loss query

This suggests that much larger networks can use path queries if most of their queries are unlike path_loss. Further work would be needed to understand how queries like path_loss could be optimized, and why congested_link appears to have decreasing time complexity as network size increases.

Python instead of PyPy

We replicated Figure 9 almost identically using Python instead of PyPy. Absolute compilation times were very close to those in the original paper. The choice between the two does not seem to impact path query compilation time. The author indicated that they had not tried the standard Python at all, so this is a new result.

The underlying machines are different, to be clear: we have Google Cloud’s virtualized 2.2-2.6 GHz Xeon E5 cores, while they ran on a bare-metal 3.7 GHz Xeon E3 server. Even so, if we assume these machines are within an order of magnitude of each other, the PyPy JIT doesn’t seem to help much with the query compilation workload, even though it is a JIT rather than an interpreter. It’s possible that most of the work is done in NetKAT, Ragel, and other external tools, so Python is not a major bottleneck for this system.
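For anyone who wants to compare interpreters on this workload themselves, here is a hedged harness sketch; compile_queries.py is a hypothetical stand-in for the real test entry point, which we actually invoked through the authors’ scripts.

    import subprocess
    import time

    # Time the same compilation workload under CPython and PyPy.
    for interp in ["python", "pypy"]:
        start = time.time()
        subprocess.check_call([interp, "compile_queries.py"])  # hypothetical script
        print("%s took %.1f s" % (interp, time.time() - start))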

Platform

We’ve built a custom VM image, based on the Pyretic 0.2.2 starter VM from the (abandoned) Pyretic website. (That VM appears itself to be an extension of a Mininet VM.) We found that the Pyretic starter VM does not actually work out of the box: it expects the Frenetic compiler to be available, but that compiler isn’t pre-installed. We installed Frenetic (and the OCaml compiler and libraries it requires), as well as Ragel and other dependencies; we also switched the repository to the path_queries branch and fixed some broken hard-coded paths. It doesn’t seem like the Pyretic starter VM we downloaded was ever tested by anyone. Finally, we migrated the machine from VMware to Google Cloud.

We then wrote a script, pyretic-clean-to-path-queries.sh, which takes that Pyretic 0.2.2 VM and installs everything that’s needed to replicate the path query tests as we have. We froze the resulting disk and made a VM image (which is on public Google Storage), so anyone can replicate the results just by making a VM from that disk image. That VM has all the necessary packages already installed and has the tests in place, ready to execute — it shouldn’t even need to go on the Internet — so as long as there are still virtual machine monitors that run our VM, at least the trendline of our results should be reproducible. (Things like absolute compile times still depend on hardware.)

You do need a big instance with tens of gigabytes of RAM to get the best results, but you only need to run the instance as long as the test takes, and you can use a preemptible instance to save money. The frozen VM image makes testing ideas much easier, and lets us avoid wasting time: we often ran subsets of the test suite in parallel on separate instances, because it is so easy to spin up a new, working path query system from our image.

README

See our GitHub repository.

You’ll import our VM image, then use the imported image to make a virtual machine, then use our script to run the tests on the machine, then use another script to plot the results.

Import the image

First, you’ll import our premade virtual machine image (which has the path query system already set up) into your Google Cloud project.

  1. Log into the Google Cloud console in your browser. You should have accepted the terms of service, set up a project, enabled billing, and wired the project to a billing account — in other words, you should be in a state where you can make a new instance.
  2. Click on Compute Engine on the left-side menu, then go to the Images section.
  3. Click Create Image in the toolbar at the top.
  4. Let Name be pyretic-path-queries, and change Source to be ‘Cloud Storage file’ instead of Disk. Leave the Optional fields blank. For the Cloud Storage file, type in path-queries/myimage.tar.gz, which is the public URL for the image we made. (We based it on the Pyretic 0.2.2 VM on their website, then installed extra packages like the Frenetic compiler for the path query tests and subsequent analysis.)
  5. Click Create. Wait for the operation to go through; the new pyretic-path-queries image should show up in the Images list.

Create the virtual machine instance

  1. Go to the VM instances section of the Compute Engine console. Click Create to go to the ‘Create an instance’ screen.
  2. Name the new instance path-queries-replicator. You should probably choose a zone in a region where you don’t have any existing VM instances: if you already had a VM instance in us-west1-c, for example, then you shouldn’t pick any zone in us-west1 here. We’re going to spin up an eight-core VM, which is likely to reach your region cap for Google Cloud. Choose machine type n1-highmem-8, which has 8 vCPUs and 52 GB of memory. Change the boot disk: in the boot disk screen that pops up, go to ‘Custom images’ and select the pyretic-path-queries image. If you want, you can go into the ‘Management, disk, networking, SSH keys’ section and make the instance preemptible to save money, although you then risk having to resume it if it gets terminated.
  3. Click Create. Wait for the operation to complete. When you get an External IP for the new instance in the list of instances, you’re ready to run the tests.

Run the tests

  1. Clone this repository on your own computer (not on the VM instance). Go to the replicate subfolder.
  2. Run ./replicate-in-vm.sh EXTERNALIP, where EXTERNALIP is the external IP of the instance you just made. You may get prompted to approve a key fingerprint (say yes) or to enter a password for user mininet on the machine (the password is mininet) when each command runs.

If something goes wrong, you can try ssh-ing into mininet@EXTERNALIP, deleting the path-queries folder, and rerunning the replicate script. At worst, you can make a new instance.

  3. You should see the scalability trend tests running, starting with run 1 of igen_delauney on a 20-node network. These tests attempt to compile various path queries on increasingly large synthetic networks (up to 100 nodes); they track how many rules are emitted and how long compilation takes. The tests should take about 5-6 hours to complete. Note that the tests run inside screen over ssh: if you lose your connection, they’ll keep running on the instance. You can see a full log in ~/path-queries/screenlog.0 on the VM, and you can ssh into the VM and run screen -x to reattach to the test terminal if you lose the original session.

Plot the results

Once the tests have all completed (the signals: replicate-in-vm.sh terminated successfully, the screen session no longer exists, and screenlog.0 shows that a test ended and no further tests ran), we can view the results.

Go to the replicate subfolder on your computer. Run ./get-results.sh EXTERNALIP. The result plots will appear in the replicate/results subfolder.

Make sure you turn off the giant VM instance when you’re done using it!

Extensions

If you want to run different tests, edit test/rep_tests.sh before running replicate/replicate-in-vm.sh. You can add different network sizes to NUM_NODES_ARR, for example, or you can remove some of the OPT_FLAGS. You can try the Stanford or enterprise scripts.

The test scripts from the authors, which our scripts call into, are all on the VM instance, in ~/pyretic/pyretic/evaluations/scripts/nsdi16. Each test writes raw output data to the folder ~/pyretic/pyretic/evaluations/evaluation_results/nsdi16/QUERYTYPE_NUMNODES_OPTIMIZATION_RUN/, depending on its parameters.

Feedback

Google Cloud cut off our CS244 billing account one day last week and terminated all our VMs. I couldn’t find any explanation of why, and I got no notice: I just saw in the morning that they were all down. I moved them onto my own billing account and brought them back up, but we found the episode disturbing, especially considering how long some of our tests take.

The process of migrating VMs from VMware to Google Cloud, and then from one Google Cloud account into public storage and back into another account, was painful. You need to tar the disk image and round its size up to a whole gigabyte, you sometimes need to run dd inside the VM and attach a temporary disk, and the whole process seems like it could be much more straightforward.

References

Narayana, Srinivas, Mina Tahmasbi, Jennifer Rexford, and David Walker. “Compiling path queries.” In 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16), pp. 207-222. USENIX Association, 2016.

Smolka, Steffen, Spiridon Eliopoulos, Nate Foster, and Arjun Guha. “A fast compiler for NetKAT.” In Proceedings of the 20th ACM SIGPLAN International Conference on Functional Programming (ICFP 15), pp. 328-341. ACM, 2015.

Reich, Joshua, Christopher Monsanto, Nate Foster, Jennifer Rexford, and David Walker. “Modular SDN programming with Pyretic.” USENIX ;login: 38, no. 5, 2013.


One response to “CS244 ’17: Compiling Path Queries”

  1. We ran their code on a machine on Google Cloud with 8 CPUs and 52 GB of RAM. The instructions for setting up the experiment are clear and straightforward. We successfully replicated the result after 4 hours, and the experimental results closely mimic the ones described in the blog post. We give 5/5 for the replication score.

    The blog post identifies two important problems in the use of path queries: the time taken to compile, and whether the rules can fit into SDN routers. Furthermore, in addition to replicating the chosen figure, the extensions presented provide a cogent analysis which helps illuminate the underlying compilation behavior of the different query types.

    The authors of the post also present a clear breakdown of the system and components of the path query system, explain their choices of experimental setup, and analyze some difficulties they faced and solved in replicating the result. We found this useful in understanding the intricacies of replicating the original work.

    Overall, thanks for presenting this exciting work.
