On the joys of the test harness

This is the story of my favourite kind of programming. The kind of programming where you have a handful of existing tools, a bunch of data, and you need to make it all fit together somehow. There's no neat-shrink-wrapped-software-download-it-from-the-web way to do it (none of this stuff was meant to go together), so you start hacking your tools, abusing data formats, working in multiple languages, and otherwise getting round pegs to slide smoothly into square holes.

Today's fun comes as a result of the need for a test harness to drive automated tests for the Processing.js parser.  I tell it in large part to help my students see how one leverages open technologies and formats on the way to solving otherwise unsolvable problems.  There's no button in Visual Studio for what we're about to do here.

The story is set within the Processing for the Web project, which is making a JavaScript version of Processing. The past few days we've been trying to track down some bugs related to the parser. Right now, there is a clever hack in place that takes Processing code and, using a series of regular expressions, turns it into pure JavaScript. It's a nice bit of code, and works pretty well at turning the Java-style syntax into something the browser can understand. It's also very brittle.
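To give a flavour of the approach (this is an illustrative sketch of the technique, not the actual Processing.js regexes), the conversion does transformations along these lines:

// Illustrative only: turn Java-style typed declarations into var declarations
var processingCode = 'int x = 7;\nfloat y = 5.0;';
var jsCode = processingCode.replace(/\b(?:int|float|boolean|char)\s+(?=\w+\s*=)/g, 'var ');
// jsCode is now 'var x = 7;\nvar y = 5.0;'

You can see why this gets brittle: any syntax the regular expressions don't anticipate slips through untouched, or worse, gets mangled.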

What we really need here, and what Processing itself uses, is an antlr-based grammar to build our parser on. Processing basically subclasses the Java antlr grammar, and we could do the same thing, only using the JavaScript target. We'd probably do this in parallel with a parser-lite version in the browser, so that you could generate your pure JS Processing code once, and then include that in your web pages. I've filed a bug, and some unsuspecting student is going to get the chance to work on it soon...

But! Before you can go reimplementing a parser, and before you can properly fix regressions in the current one, you need a proper test suite. We really need a simple way to take valid Processing code and make sure our parser produces valid JavaScript. And we need it now, so we don't break more code while we fix these bugs. I debated making it a student project for this term, but Al convinced me that we needed it for the next release. So today I wrote a test harness.

The problem is actually pretty simple, but automating it required some thinking. I have a parser written in JS that is meant to be run in a web page, and I need to take lots of Processing code and run it through the parser: some of it I want to pass, some of it I want to fail, etc. In order to automate this, I really want to decouple it from the browser, and do it all from the command-line.

A lot of these problems are already solved in the Mozilla context.  With Firefox we have a whole slew of testing frameworks and tools to drive our various automated tests.  To run a lot of these, we don't need a full browser, just a JS shell that can execute our JS code and report to stdout.  I spoke with Ted today about our issues, and he recommended modeling it on how xpcshell tests work (i.e., js + python + makefile).

The first step was to take a directory of Processing files and turn them into something I could feed into a test harness running in the jsshell. The jsshell can run the Processing.js parser, since it's just JS, and as long as I can feed the Processing code in, I'm good. There's even a handy load() function I can use to get files into the jsshell. However, what it loads has to be valid JS, and Processing code is anything but. I needed a way to trick the jsshell into loading the Processing code so I could get it into the parser and test it.
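In other words (the test file name here is made up for illustration):

load('processing.js');    // fine: the parser is plain JS
load('tests/basic.pde');  // SyntaxError: the jsshell tries to parse Processing code as JS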

I chose to use python, and wrote a script that scans the test dir for all the files and turns them into JSON strings, which I can pass on the command-line to the jsshell executable (i.e., ./js -e '{code: "x = 7;\ny = 5;\n..."}'), along with my test harness scripts. I then wrote my test harness in JS, which also loads the Processing.js code (i.e., the parser), and then I basically do this:

try {
  // parse the Processing code, then execute the generated JS
  eval(Processing(canvas, processingCode));
  _pass();
} catch (e) {
  _fail();
}

The Processing function takes a canvas element (where the graphics will get drawn), and the Processing code, and tries to parse it.  What it gives back is pure JS, so we can use eval() to try and execute it.  If that works, the generated JS is good, so our test passes; if not, we fail.  But there's another problem: the Processing code assumes there's a canvas element and a DOM--basically, it assumes it's running in a web browser as part of a page.  But we're nowhere near a web browser when this runs, so we have to fake it.
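Fleshed out a little, the harness loop amounts to something like this (a sketch under my own names: tests, harness.js, and the printed output are all illustrative, not the real harness's, and the canvas here is just a stub, as explained next):

// The python driver hands the tests in via -e, roughly:
//   ./js -e 'var tests = [{name: "basic.pde", code: "int x = 7;"}]' -f harness.js
load('processing.js');            // the parser itself

var canvas = {};                  // a stand-in; more on faking the DOM below
var passed = 0, failed = 0;

function _pass(name) { passed++; print('PASS ' + name); }
function _fail(name, e) { failed++; print('FAIL ' + name + ': ' + e); }

for (var i = 0; i < tests.length; i++) {
  try {
    eval(Processing(canvas, tests[i].code));
    _pass(tests[i].name);
  } catch (e) {
    _fail(tests[i].name, e);
  }
}

print(passed + ' passed, ' + failed + ' failed');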

The nice thing about JS is that you can create and modify objects, data, functions, whatever on the fly. In my case, I needed to fool the Processing.js parser into thinking that it had access to various DOM functions and objects (e.g., document, setInterval, etc.). My solution was to create dummy functions and object literals that could stand in for the real thing. The parser doesn't actually need to call this code; it just needs to be there and non-null. Here's my setInterval implementation:

var setInterval = function() {};

Using this trick I was able to slowly write all the bits of the fake DOM I needed to get the parser running. As I've run a few larger tests, I've noticed more bits of the DOM I need to implement, but it's pretty trivial to add them as I go.
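For the record, the rest of the fake DOM is just more of the same (this particular set of stubs is illustrative; the real list is whatever the parser happens to touch):

// none of this gets meaningfully called; it just has to exist and be non-null
var window = this;
var clearInterval = function() {};
var document = {
  getElementById: function() { return {}; }
};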

I've filed a bug and am waiting for some feedback before I fix a few more things and get it checked in. Once I do, I plan to find some other students to write a ton of tests to try and break the parser even more, and then file bugs so we can fix them. Writing tests may not be the most fun you can have, but honestly, writing a test harness is pretty close.
