Saturday, November 16, 2013

Sound Media Studies

When it comes to media, all of our human senses are tied to the perception of frequency: the visible spectrum is what we sense via sight, and the audio spectrum is what we sense via hearing. Taken together, they represent only two tiny patches of the total frequency spectrum, and yet it's remarkable to consider that it's sound that we perceive with the greatest precision, distinguishing a wide variety of characteristics beyond frequency itself; the difficulty of programming a computer to recognize human voices using natural language is testimony to that (though Dragon may soon change everything).

Sound was also the first medium to become technically recordable, as well as the first to be broadcast. Sound recordings had been around for nearly fifty years before the first sound films were released, and voice radio predates television broadcasting by twenty years or more (twenty if you count any broadcast at all, nearly forty if you count only regular commercial broadcasts).

Sound has been a pioneer in the digital and Internet revolutions as well, and that's easily understandable. Take the CD audio standard of 44.1 kHz: that's less than 1% of the NTSC television bandwidth of 5.75 MHz, so it's no wonder that audio was compressed, stored, and shared long before even standard-res video (and HD demands more than 3 times the data density of the old NTSC standard).
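The arithmetic behind that comparison can be sketched in a few lines. These are back-of-the-envelope figures using the numbers cited above (the exact video bandwidth varies by source), meant only to illustrate the order-of-magnitude gap between audio and video:

```python
# Rough comparison of audio vs. video data demands.
# Figures: CD audio spec (44.1 kHz, 16-bit, stereo) and the
# NTSC video bandwidth cited in the text (5.75 MHz).

CD_SAMPLE_RATE_HZ = 44_100        # CD audio sampling rate
NTSC_BANDWIDTH_HZ = 5_750_000     # NTSC video bandwidth (figure from text)

# Ratio of CD sample rate to NTSC video bandwidth
ratio = CD_SAMPLE_RATE_HZ / NTSC_BANDWIDTH_HZ
print(f"CD sample rate is {ratio:.2%} of NTSC video bandwidth")

# Uncompressed CD audio bit rate: 44.1 kHz x 16 bits x 2 channels
cd_bitrate_bps = CD_SAMPLE_RATE_HZ * 16 * 2
print(f"Uncompressed CD audio: {cd_bitrate_bps / 1e6:.2f} Mbit/s")
```

Running this shows the sample-rate ratio is roughly 0.77%, an order of magnitude below the "7%" one might guess from a slipped decimal, and the uncompressed CD bit rate comes out to about 1.41 Mbit/s.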

Sound, of course, was originally free and totally ephemeral; once spoken, sung, or plucked, it was gone. It then became a physical object, the cylinder and then the disc, sold to a mass public, and channeled through later forms such as the 8-track tape, cassette, and digital CD. But then, thanks to compression schemes such as MPEG-1 Audio Layer III, better known as MP3 (originally designed to compress the sound elements of a video signal), sound became the first thing to fit through the narrow tube that was the pre-broadband Internet; the rest, as they say, is history.

And yet, in most Media Studies programs and New Media books, sound seems relegated to a very small, supporting role. It's been naturalized, made invisible, save in its absence, as when we talk about "silent" movies (which of course were never silent; before recorded soundtracks they were nearly always accompanied by live music). And so I ask (dropping into a KRS-One tone), "Why is that?" And what should we do about it?