View Source as Musical Innovation
It's been interesting to watch the flood of reactions around the web to our latest demos and audio experiments. Here are a few:
- Al MacDonald lays out a history and potential future for our work
- Article in Create Digital Music, "Real Sound Synthesis, Now in the Browser; Possible New Standard?"
- Post on Wired's WebMonkey blog, "New HTML5 Tools Make Your Browser Sing and Dance"
- Feature of Multi-Touch Audio Data Bloop demo on weave.de
A number of comments on these and other blogs, and people on Twitter, have pointed out that Flash already allows some of this. Having read that response so many times, I want to reply and suggest that what we're doing isn't simply reaching parity with Flash. I don't think it's an exaggeration to say that exposing audio data to the open web has the potential to change sound, audio, and music. The reason is that HTML5- and JavaScript-based audio participates in "View Source," and that means creating a whole new kind of active and passive audio collaboration.
The reason the web has grown like it has, the reason there is so much innovation, the reason so many people of varying levels of expertise can use it, or, as Mike Shaver put it in 2007, the reason the web won, is View Source:
> If you choose a platform that needs tools, if you give up the viral soft collaboration of View Source and copy-and-paste mashups and being able to jam jQuery in the hole that used to have Prototype in it, you lose what gave the web its distributed evolution and incrementalism. You lose what made the web great, and what made the web win.
The way View Source functions, with respect to HTML documents, is well understood. It's so well understood that its absence becomes something you can't not see. The Flash-based music visualization or audio app that runs in the browser today looks and sounds great, but that's all it does--it can't lead others to iterate and innovate. Sure, we can deploy audio on the web, but we can't tinker with it if we can't get at how it's made. Pressing 'play' isn't the same as playing.
Right now the community of people actively working on audio data in the browser is small (about 12 people that I know of), but it already points to what I'm talking about. When we made Bloop (a processing.js version of Eno's Bloom), we did it iteratively. We learned how to generate simple sounds using JavaScript, then built scales, followed by more complex wave patterns, followed by oscillators, and so on. The code bounced back and forth between people on IRC and Twitter before it got extracted into a reusable JavaScript library, which has already allowed Corban and Maciej to start building a multiuser synthesizer/sequencer based on our audio API, node.js, and processing.js (code is here).
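To give a concrete sense of that first step, here is a minimal sketch of generating a single tone in JavaScript. It assumes Firefox's experimental Audio Data API (mozSetup/mozWriteAudio), which these demos build on; the specific frequency, sample rate, and single-shot write are illustrative choices on my part, not the code from Bloop or the extracted library.

```javascript
// Minimal sketch: play a short sine tone with the experimental
// Audio Data API. Requires a Firefox build that exposes
// mozSetup/mozWriteAudio; all constants here are illustrative.
var audio = new Audio();
audio.mozSetup(1, 44100);            // 1 channel, 44.1 kHz sample rate

var sampleRate = 44100,
    frequency  = 440,                // A4, a single note of a scale
    duration   = 1,                  // seconds
    samples    = new Float32Array(sampleRate * duration);

// Fill the buffer with one second of a sine wave at the given pitch.
for (var i = 0; i < samples.length; i++) {
  samples[i] = Math.sin(2 * Math.PI * frequency * i / sampleRate);
}

// Queue the samples for playback. Real code would check the return
// value and write in chunks as the internal buffer drains.
audio.mozWriteAudio(samples);
```

From a snippet like this, building a scale is mostly a matter of mapping notes to a set of frequencies and filling buffers accordingly, which is roughly the path the Bloop work followed before moving on to richer waveforms and oscillators.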
This way of working is so common on the web that it's almost not worth mentioning. But I mention it because this was, as far as I can tell, the first time people collaborated on the web to build music using the technologies of the web itself. And just as it has for all manner of other things, the web made building music easier and faster.
One of the properties of sound is that it bounces off things, echoes, and changes. The history of music is filled with people innovating by playing with existing sounds. Allowing sound to exist in a more manifest and malleable way on the web, to become scriptable, viewable, copy-pastable, means that innovation in sound and music will happen more often and much, much faster.
It's a good time to be having this discussion, since the trajectory of music on the web has long been pointed away from sharing and collaboration. What we're plotting with an audio data API for the web isn't a new method of delivering music. It's a new way of creating and collaborating, stolen directly from the web's playbook. This way of working is how the web beat everything else. What will happen to music and sound if we let it do the same?