I'm working with an ever-growing group of web, audio, and Mozilla developers on a project to expose audio spectrum data to JavaScript from Firefox's audio and video elements. Today we presented that work at WWW2010.
I'm in Raleigh, North Carolina, with Al MacDonald for the WWW2010 conference. We're here to present our work on exposing audio data in the browser. Over the past month, Corban, Charles, and a bunch of other friends have been working with us to refine the API and get new types of demos ready. We ended up with eleven demos, some of which I've shown here before. Here are the others.
The first was done by Jacob Seidelin, and shows many cool 2D visualizations of audio using our API. You can see the live version on his site, or check out this video:
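To give a sense of how little code a visualization like this needs, here is a minimal sketch of the basic pattern: listen for audio frames from the element and paint them to a canvas. The event and attribute names (MozAudioAvailable, frameBuffer) follow one iteration of our experimental builds and may still change; this is the idea, not Jacob's actual code.

```js
// Read raw sample data from an <audio> element and draw a simple
// waveform to a <canvas>. Names are from our experimental API.
var audio  = document.getElementById('audio'),
    canvas = document.getElementById('canvas'),
    ctx    = canvas.getContext('2d');

audio.addEventListener('MozAudioAvailable', function (e) {
  var samples = e.frameBuffer,                 // float samples in [-1, 1]
      step    = samples.length / canvas.width;

  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.beginPath();
  for (var x = 0; x < canvas.width; x++) {
    // Map each column of pixels to a sample, centred vertically.
    var s = samples[Math.floor(x * step)],
        y = (1 - s) * canvas.height / 2;
    if (x === 0) {
      ctx.moveTo(x, y);
    } else {
      ctx.lineTo(x, y);
    }
  }
  ctx.stroke();
}, false);
```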
The second and third demos were done by Charles Cliffe, and show 3D visualizations using WebGL and his CubicVR engine. These also show off his JavaScript beat detection code. Is JavaScript fast enough to do real-time analysis of audio and synchronized 3D graphics? Yes, yes it is. The live versions are here and here, and here are some videos:
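The core of energy-based beat detection is simple enough to sketch in a few lines. This is my own simplified version of the technique, not Charles's actual code: flag a beat whenever the current frame's energy jumps well above the recent average.

```js
// Rough energy-based beat detection: keep a short history of frame
// energies and flag a beat when the current frame is well above average.
var history = [],
    HISTORY_SIZE = 43;   // roughly a second of frames at typical rates

function detectBeat(samples) {
  // Instantaneous energy of this frame (mean of squared samples).
  var energy = 0;
  for (var i = 0; i < samples.length; i++) {
    energy += samples[i] * samples[i];
  }
  energy /= samples.length;

  // Average energy over the recent history.
  var avg = energy;
  if (history.length > 0) {
    avg = 0;
    for (var j = 0; j < history.length; j++) {
      avg += history[j];
    }
    avg /= history.length;
  }

  history.push(energy);
  if (history.length > HISTORY_SIZE) {
    history.shift();
  }

  // A beat is a frame whose energy spikes above the local average.
  return energy > avg * 1.4;
}
```

The 1.4 threshold and 43-frame window are illustrative; real code tunes both to the music and frame rate.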
The fourth demo was done by Corban Brook and shows how audio data can be mixed live using script. Here he mutes the main audio element, plays it, passes its data through a low-pass filter written in JavaScript, then writes the modified frames into a second audio element to be played. It's a technique we need to apply more widely, as it holds a lot of potential. Here's the live version, and here's a video (check out his updated JavaScript synthesizer, which we also presented):
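Here is a stripped-down sketch of that read-filter-write loop, again assuming the mozSetup / mozWriteAudio names from our experimental builds (this is the shape of the technique, not Corban's actual filter):

```js
// Read frames from a muted source element, run them through a one-pole
// low-pass filter in JavaScript, and play the result via a second element.
var source = document.getElementById('source'),
    output = new Audio(),
    alpha  = 0.1,    // filter coefficient: smaller = darker sound
    last   = 0;

source.volume = 0;    // mute the original; we only want its data

source.addEventListener('loadedmetadata', function () {
  // Match the output element to the source's channels and sample rate.
  output.mozSetup(source.mozChannels, source.mozSampleRate);
}, false);

source.addEventListener('MozAudioAvailable', function (e) {
  var samples  = e.frameBuffer,
      filtered = new Float32Array(samples.length);

  // One-pole low-pass: each output sample leans on the previous one.
  for (var i = 0; i < samples.length; i++) {
    last = last + alpha * (samples[i] - last);
    filtered[i] = last;
  }

  output.mozWriteAudio(filtered);
}, false);
```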
The fifth and sixth demos were done by Al (with the help of many). When I was last in Boston, for the Processing.js meetup at Bocoup, we met with Doug Schepers from the W3C. He loved our stuff, and was talking to us about ideas that would be cool to build. He pulled out his iPhone and showed us Brian Eno's Bloom audio app. "It would be cool to do this in the browser." Yeah, it is cool, and here it is, written in a few hundred lines of JavaScript and Processing.js (video 1, video 2):
This demo also showcases the awesome work of Felipe Gomes, who has a patch to add multi-touch DOM events to Firefox. The method we've used here can be taken a lot further. Imagine being able to connect multiple browsers together for collaborative music creation, layering other audio underneath, mixing fragments vs. just oscillators, etc. We built this one in a week, and the web is capable of a lot more.
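For anyone curious what "a few hundred lines" buys you, here is a hypothetical sketch of the generative core of a Bloom-style app (not Al's actual code): on each click or touch, synthesize a short decaying sine tone in JavaScript and write it out. Method names again follow our experimental builds.

```js
// Synthesize a decaying sine tone per touch/click and write it
// to an audio element with mozWriteAudio.
var out  = new Audio(),
    rate = 44100;

out.mozSetup(1, rate);   // one channel at 44.1kHz

function playNote(frequency) {
  var seconds = 1.5,
      samples = new Float32Array(seconds * rate);

  for (var i = 0; i < samples.length; i++) {
    var t = i / rate;
    // Sine oscillator shaped by an exponential decay envelope.
    samples[i] = Math.sin(2 * Math.PI * frequency * t) *
                 Math.exp(-3 * t) * 0.5;
  }

  // A real app would check the return value and re-queue any
  // samples that didn't fit in the buffer on this call.
  out.mozWriteAudio(samples);
}

document.addEventListener('click', function (e) {
  // Map vertical position to pitch, roughly like Bloom's note grid.
  playNote(220 + (1 - e.clientY / window.innerHeight) * 440);
}, false);
```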
One of the main points of our talk was to emphasize that what we're talking about here isn't just a concept, and it isn't some far-away future. This is real code, running in a real browser, and it's all being done in HTML5 and JavaScript. The web is fast enough to do real-time audio processing now, and powerful and expressive enough to create music. And the community of digital music and audio hackers, visualizers, and the like is hungry for it. So hungry that they are seeking us out, downloading our hacked builds, and creating gorgeous web audio applications.
We want to keep going, and we need help. We need help from those within Mozilla, the W3C, and other browsers to get this stuff into shipping browsers. We need the audio, digital music, accessibility, and web communities to come together to help us build JavaScript audio libraries and more sample applications. Yesterday Joe Hewitt was talking on Twitter about how browser vendors need to experiment more with non-standard APIs. I couldn't agree more, and here's a chance for people to put their money where their mouth is. Let's make audio a scriptable part of the open web.
I'm currently creating new builds of our updated patch for Firefox, and will post links to them here when I'm done. You can read more about the technical details of our work here, and get involved in the bug here. You can talk with me on IRC in the processing.js channel (I'm humph on moznet), on Twitter (@humphd), or by email. One way or another, get in touch so you can help us push this forward.