Tuesday, June 26, 2007
For years the live/performance (rock, indie, jazz etc) and studio/composition (dance, urban etc) traditions have, to a certain extent, eyed each other nervously from polar extremes. Rock and indie have maintained their “live” moral high ground, where the ability to reproduce songs in real time became canonised as a sacred experience. By contrast, any studio act unable to compete missed out on lucrative live circuit revenue. So, despite the losses in sonic impact or credibility, what were ostensibly studio acts took to the live arena. Just compare Destiny’s Child (studio), as produced by Rodney Jerkins and Timbaland, with Destiny’s Child (live), complete with fat rock drummer and session slap-bassist. Bon Jovi tom solos and atonal Seinfeld bass indulgence are go!
Kode9’s recent performances, at the Sonar and Mutek festivals, have pushed well beyond the realm of mere DJ sets into the live arena. Spaceape takes the mic while Kode mans the live re-edit machine, Ableton. (You can see some of it here, thanks to Tobias van Veen, alongside Kode’s wicked but lesser spotted sense of humour).
With studio composition, via Ableton, increasingly edging into rock’s performance realm, it’s interesting to look into what actually separates “live” performance from “electronic/live” performance. Excluding outfit changes and dance routines (let’s not go there, shall we…), the visual medium of performance, in both electronic and rock music, is tied to the physical process of making the music. That much they have in common. The emotional response to the visual element of (say) live rock, however, has long since become tied into specific, learned gestures, perceived and understood by audiences to have known meanings and emotional responses. Even DJing, which doesn’t bear a direct correlation between the movements of the performer and the parts of the track, has come to be visually appreciated by fans – just witness the drooling a technically amazing DJ like Youngsta receives from audiences. What’s interesting, therefore, is how, given the advances of new technologies, audiences respond and learn responses to new visual patterns in the process of making live/electronic music. Or, conversely, how audiences can seemingly enjoy and respond to the perception of live audio being performed (when it’s not), so long as all the visual cues are being provided.
Take, for example, the hip-hop/grime-MC-as-live-act. Of course, in the beginning, there were two turntables and a microphone, somewhere in the South Bronx or Bow E3. Yet when grime affiliate Plan B played “live” at 93 Feet East, Brick Lane, East London last month, there was a microphone, two turntables, a drummer, a guitarist and a bassist. Only when you looked closely did you notice that the visual guitar actions weren’t correlating with the guitar audio – that something was afoot, or moreover, coming out of the CD-R deck. Did the 93 Feet East crowd mind? It was packed to the rafters.
Instead of the live act that isn’t, the electronic act that’s truly live provides a far more interesting set of opportunities. The pub-fuelled ramble brought to mind a few examples. There was Matthew Herbert as Radio Boy a few years back at the National Theatre, breaking Disney videos and McDonalds boxes and live-sampling them into an anti-capitalist protest, which proved more conceptual exercise than enjoyable musical experience. More recently there’s the UK’s Jamie Woon: check the jaw-dropping live video here. My friend, who prefers more acoustic stuff, mentioned Argentina’s Juana Molina.
Looking at these three, are the physical processes they undergo to make live electronic music visually stimulating? The answer is probably yes, and definitely more so than a bloke twiddling a laptop, largely because there’s a palpable correlation between their physical movements and their audio output. What this conclusion opens up is the debate over whether, given advances in technology, the visual angle of the music-making process could be weighted as heavily as the sonic considerations. Given an engaging live electronic music-making process, that moral live high ground of rock might start to look distinctly vulnerable.
What is surely now up for grabs is whether live electronic technologies will come to be seen as visually and aesthetically pleasing, and beyond that, whether at a certain point the visual control of live electronic performance will become a dominant priority in the creation of the software and technologies themselves. So that software designers ask themselves not ‘how can I make live electronic music?’ but ‘how can I make visually engaging live electronic music?’ Maybe they already have.