Experiments with audio, part VIII
I'm working with an ever-growing group of web and Mozilla developers, along with some talented audiophiles, on a project to expose audio spectrum data to JavaScript from Firefox's audio and video elements. Today we bring out the demo reel.
Last time I wrote about our success getting the browser to make dynamically generated sound. Since then we've continued our work to expose and use raw audio data, and it has produced some delicious demos. Today I finally got set up to record some of them and show a bit of what's possible. NOTE: I'll provide links for all of these, but you need a patched Firefox to see things happen.
The first demo is a visualization of audio spectrum data using the C++ FFT code (the one Corban did before was done with JavaScript). This particular song is perfect for demonstrating beat synchronization, wave changes, etc. Al MacDonald wrote both the code and the music (click here to watch the video):
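To give a sense of what the drawing side of a demo like this looks like, here's a minimal sketch, assuming the patched build hands JavaScript an array of spectrum magnitudes through an event. The event name ('audiowritten') and the e.spectrum property below are placeholders, not the real API, which is still taking shape.

    // Placeholder names: the patched <audio> element is assumed to fire an
    // event carrying an array of FFT magnitudes, roughly in the range 0..1.
    var canvas = document.getElementById('spectrum');
    var ctx = canvas.getContext('2d');

    function drawSpectrum(magnitudes) {
      var barWidth = canvas.width / magnitudes.length;
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      for (var i = 0; i < magnitudes.length; i++) {
        var barHeight = magnitudes[i] * canvas.height;
        ctx.fillRect(i * barWidth, canvas.height - barHeight, barWidth - 1, barHeight);
      }
    }

    document.getElementById('player').addEventListener('audiowritten', function (e) {
      drawSpectrum(e.spectrum);   // placeholder event and property names
    }, false);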
The second demo shows how the <video> element can be used in the same way as <audio>. In this video of whales singing to one another, Al once again visualizes the spectrum data, but this time in 3D, and overlays it on the <video> element using <canvas> (click here to watch the video):
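The overlay itself is plain web plumbing. Here's a rough sketch, assuming a position:relative container holding the <video> with a sibling <canvas>; the element IDs are made up:

    // Size the canvas to match the video and pin it to the same corner,
    // so anything drawn on the canvas appears on top of the playing frames.
    var video  = document.getElementById('whales');
    var canvas = document.getElementById('overlay');

    canvas.width  = video.clientWidth;
    canvas.height = video.clientHeight;
    canvas.style.position = 'absolute';
    canvas.style.left = '0';
    canvas.style.top  = '0';

    var ctx = canvas.getContext('2d');
    // ...the spectrum (drawn in 3D in Al's demo) gets rendered into ctx on
    // each animation frame, just as in the previous sketch.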
The third demo shows both visualization and dynamic generation of audio using JavaScript. In this demo an oscillator module generates a signal, which is sent through a low-pass filter and an ADSR envelope and then written to an <audio> element. The demo was created by Corban Brook, who also wrote the PJSAudio library especially for the task (click here to see the video):
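For a sense of what that signal chain involves, here's a stripped-down sketch of the same idea (not Corban's PJSAudio code): a sine oscillator, a crude one-pole low-pass filter, and a linear ADSR envelope. The step where the samples actually get written into the <audio> element is deliberately left out, since that's the very part of the API still being worked out.

    // Sine oscillator -> one-pole low-pass filter -> linear ADSR envelope.
    var sampleRate = 44100;

    function generateNote(freq, seconds) {
      var samples = new Array(Math.floor(sampleRate * seconds));
      var phase = 0, filtered = 0;
      var alpha = 0.1;                          // crude low-pass coefficient
      var a = 0.05, d = 0.1, s = 0.7, r = 0.2;  // attack/decay/release times (s), sustain level

      for (var i = 0; i < samples.length; i++) {
        // Oscillator
        var sample = Math.sin(phase);
        phase += 2 * Math.PI * freq / sampleRate;

        // One-pole low-pass filter
        filtered += alpha * (sample - filtered);

        // Linear ADSR envelope
        var t = i / sampleRate, env;
        if (t < a)                env = t / a;
        else if (t < a + d)       env = 1 - (1 - s) * ((t - a) / d);
        else if (t < seconds - r) env = s;
        else                      env = s * ((seconds - t) / r);

        samples[i] = filtered * env;
      }
      return samples;
    }

    var note = generateNote(440, 1.0);  // one second of A440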
For my part, I've been busy thinking about how to rewrite my browser implementation so it's done properly. In the third demo you'll hear some static, since I'm not doing any buffering, which we need in order to deal with the latency of generating the audio frames. I've got a number of email threads going with people who can help me figure out the API and implementation directions.
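One way the buffering might work is to keep a fixed amount of generated audio queued ahead of the playback position, so a slow JavaScript callback adds latency instead of static. The sketch below only illustrates that idea; generateNextChunk(), writeSamples(), and getPlaybackPosition() are all placeholders for whatever the real API ends up providing.

    // Stay a fixed distance ahead of playback by topping up a write buffer.
    var sampleRate   = 44100;
    var prebufferSec = 0.5;   // how far ahead of playback to stay
    var written      = 0;     // samples handed to the element so far

    function pump(currentPlaybackSample) {
      var target = currentPlaybackSample + prebufferSec * sampleRate;
      while (written < target) {
        var chunk = generateNextChunk(2048);  // e.g. synth code like the previous sketch
        writeSamples(chunk);                  // hypothetical write call
        written += chunk.length;
      }
    }

    setInterval(function () {
      pump(getPlaybackPosition());            // hypothetical playback-position call
    }, 100);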
In addition to working on code, we've also been chatting about the other things we might do with audio in the browser. Here are some of the ideas that have come up:
Audio Data and Experiencing 3D
As WebGL gets closer to being released, I've been thinking about accessibility within 3D scenes. Reading this inspiring article about a blind architect made me think about sound in a new way. I've got students right now working on light and cameras in 3D web environments, but what about sending sound in too? What if you dynamically sent sound into a 3D scene and had it echo back to you, so that you could judge depth, objects in front of you, different materials, etc.? Doing this well requires dynamic transformations of the sound, which is exactly what this work is making possible (Corban's demo shows the first steps).
Audio Data and Seeing Sound
Still on the subject of accessibility, I've been thinking about sound data, and especially the visualization of sound, for those who can't hear it well, or at all. One of my colleagues first got me thinking in this direction: she can't hear a lot of the sound her computer makes, and is often unaware that web pages are even making sound. In cases like this, it would be great to have even simple indications that there is sound, how loud it is, etc. Taking this further, one can imagine exploring ways to make the browser more like the Emoti-Chair or other similar devices, which bring the experience of sound to those who can't hear it.
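Even something that small would help. As a toy illustration, assuming the page can get at a frame of samples by whatever means, an RMS loudness value can drive a simple on-page meter; the element ID below is made up:

    // Root-mean-square loudness of one frame of samples (0 = silence).
    function rms(samples) {
      var sum = 0;
      for (var i = 0; i < samples.length; i++) {
        sum += samples[i] * samples[i];
      }
      return Math.sqrt(sum / samples.length);
    }

    function updateIndicator(samples) {
      var level = rms(samples);
      var meter = document.getElementById('sound-meter');   // placeholder element
      meter.style.width = Math.min(level * 300, 300) + 'px';
      meter.title = level > 0.01 ? 'This page is making sound' : 'Silent';
    }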
Audio Data and Programmatic Approaches to Music
Then there's the possibility of making music specifically for this medium. We (I'm thinking of people like me who can't play instruments) tend to think of music as a consumable end-product vs. something you build or assemble. Music is something you buy, something you download, something you play. That's what text used to be, too. But putting text on the web, and specifically, assembling text using code, changed that forever. Just as we mix text from different sources, dynamically overlay it, transform it, translate it, etc., what if you could listen to music being mixed and altered live in your browser? What if music was one part drum loops and one part repeating sample and both were written in JavaScript? What if those got layered over audio being played from an .ogg file, or got mixed into something happening in a video? What if the music was changed in response to what you were doing in the page? When music becomes algorithmic, scriptable, and composable, any number of new things will happen.
This portion of our experiments has really been about imagining what you might do with something new. The browser hasn't traditionally been the place where sound gets made. But what if it was? How would music adapt to its new surroundings? What would we expect of it?
I'll leave you with a quotation from a recent interview with Brian Eno, in which he talks about the synthesizer and innovation in music and sound. When he says 'synthesiser', replace it with 'browser' in your mind, and think about it for a moment:
One of the important things about the synthesiser was that it came without any baggage. A piano comes with a whole history of music. There are all sorts of cultural conventions built into traditional instruments that tell you where and when that instrument comes from. When you play an instrument that does not have any such historical background you are designing sound basically. You're designing a new instrument. That's what a synthesiser is essentially. It's a constantly unfinished instrument. You finish it when you tweak it, and play around with it, and decide how to use it. You can combine a number of cultural references into one new thing.