Experiments with audio, part V

I’m working on a project to try to expose audio spectrum data from Firefox’s audio element.  Today I ponder arrays and fix some things.

Today is my last day of work before the Christmas break begins, and I just submitted my final grades.  This gave me a little bit of time to rewrite some of my code in order to fix a few things.  Here's my latest demo:

Raw sound data from <audio>, second test

After talking with some more people who work with audio, I realized that I'd be better off exposing the data as floats instead of integers.  In this demo you can see both my code, which uses event.mozAudioData, and the output in Firebug.  I'm pretty happy with our progress to date.  Twelve days isn't bad to go from clueless to slightly less clueless, with a working patch to show for it.  I want to draw attention to this, because a large part of why I'm doing this is to inspire my students and others to take risks and work on things that scare them.  You hold yourself back if you don't.  I don't know what I'm doing most of the time, I just keep at it until I do.  There's no reason you can't, too.
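To give a feel for what the float data buys you, here is a rough sketch of the consumer side.  The event name and wiring below are my assumptions (only event.mozAudioData appears in the demo), and the RMS helper is just an illustration of working with float samples, not the demo's actual code:

```javascript
// Hypothetical wiring -- the real event name in the patch may differ:
//
//   const audio = document.querySelector("audio");
//   audio.addEventListener("audiowritten", function (event) {
//     // event.mozAudioData: a batch of samples as floats in [-1, 1]
//     console.log(rms(event.mozAudioData));
//   }, false);

// RMS (root mean square) of one batch of float samples -- a simple
// "loudness" measure that is natural once the data arrives as floats
// rather than integers.
function rms(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }
  return Math.sqrt(sum / samples.length);
}
```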

Anyway, this is looking good, but I'm far from done.  What I spent the most time on with this iteration was getting my arrays working properly.  I've been doing a lot of reading in our code, looking for ways to expose pure arrays in content (e.g., in the DOM).  In the comments of my last blog post, a few suggestions were given, and in the bug another was mentioned.  Here are the ways I've come up with (there might be others):

  1. Create a DOM class that wraps your array and provides getter/length semantics (that's what I've done for now).  See nsIDOMClientRectList.
  2. Use an nsIVariant type, and let XPConnect coerce the type for you.  See nsAnnotationService::GetPageAnnotationNames.
  3. Go around IDL altogether and grab raw JS values.  See nsCanvasRenderingContext2D::GetImageData.
  4. Use the (currently not landed) WebGL Array (Vlad recommends I move to this, so I'll explore that in subsequent work).
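From script in the page, the first three options all end up looking roughly the same: an array-like object with a length and indexed access, not a true JS Array.  A sketch of the option-1 semantics in plain JS (the function and property names here are made up to illustrate the shape; the real wrapper lives in C++ behind an IDL interface like nsIDOMClientRectList):

```javascript
// Simulates the getter/length semantics a ClientRectList-style DOM
// wrapper exposes to content: .length, .item(i), and array-style
// indexing over an underlying native buffer.
function makeSampleList(data) {
  const list = {
    length: data.length,
    item: function (i) { return data[i]; },
  };
  // Expose indexed "getters" the way XPConnect does for such lists.
  for (let i = 0; i < data.length; i++) {
    list[i] = data[i];
  }
  return list;
}
```

Content code can then loop over it like a plain array, even though there is no real Array underneath.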
Now that I have it running, I'm starting to think about performance vs. just getting it to work.  A lot of questions I am getting relate to doing the Fourier Transform in JS vs. native code.  I've been around Mozilla long enough to know that the first answer to questions like this isn't to drop into C/C++.  Maybe it won't be possible in JS, but I need to have that proven before I'll make that conclusion.  I'm also concerned that my time-based events for exposing the data are going to be problematic.  I really wonder about keeping up with the timing of the audio as it's played.  A lot of people I talk to are excited about doing precise visualizations that are timed to music.  I know my current code can improve a lot here, so it's too early to assess whether this will be possible.
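To make the JS-vs-native question concrete, this is the kind of work a pure-JS spectrum visualization has to do per batch of samples.  Not my code, and deliberately the naive O(N²) DFT rather than an O(N log N) FFT, just to show the inner loop that would have to keep up with playback:

```javascript
// Naive discrete Fourier transform: returns the magnitude of each
// frequency bin for a frame of real-valued samples.  O(N^2), so a
// real visualizer would want an FFT, but the arithmetic per term
// (two multiplies, two trig calls) is the same.
function dftMagnitudes(samples) {
  const N = samples.length;
  const mags = new Array(N);
  for (let k = 0; k < N; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const angle = (-2 * Math.PI * k * n) / N;
      re += samples[n] * Math.cos(angle);
      im += samples[n] * Math.sin(angle);
    }
    mags[k] = Math.sqrt(re * re + im * im);
  }
  return mags;
}
```

Whether JS can run this fast enough, frame after frame, in sync with the audio clock is exactly the open question.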

I'm not sure when I'll post my next part of this series.  I am going to celebrate this success with some much needed holiday time.  Rest assured, though, that I'll be back at it again soon.