I've been a fan of Rafael Lozano-Hemmer's work for a while, and I did a write-up on his Pulse Park here. I found out on Friday (via a nice profile piece on WNYC by Kate Taylor, which you can hear here) that he had a show at the Guggenheim, so I bought tickets and headed over there last night, and I'm glad I did.
The show, called Levels of Nothingness, was part of the Guggenheim's always interesting Works & Process series, where artists experiment and also discuss their process. The show was described in the promotional materials as:
Inspired by Vasily Kandinsky’s Yellow Sound (1912), Mexican-born Rafael Lozano-Hemmer creates an installation where colors are automatically derived from the human voice, generating an interactive light performance. Actress Isabella Rossellini will read seminal philosophical texts on skepticism, color, and perception while her voice is analyzed by computers that control a full rig of rock-and-roll concert lighting. Audience members will have the opportunity to test the color-generating microphone.
The idea is pretty cool, and definitely makes for an interesting experiment. The microphone (which, obviously, does not generate colors as described above--it generates electricity; other press materials described the microphone as "computerized", which it is not--it's a totally analog device) feeds into Lozano-Hemmer's system, which analyzes the frequency content and loudness of the signal, and also does speech recognition. The texts Ms. Rossellini read were broken into categories; when each category was selected, the system would switch into a mode with a specific look, which was then modulated by the incoming sound. Sometimes the system made beams of white, as shown above; or beams of color, as shown here:
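To make the idea concrete, here's a minimal conceptual sketch of the kind of analysis described above: estimate a frame's dominant frequency and loudness, then map them to a hue and intensity. This is my own toy illustration, not Lozano-Hemmer's actual code, and the mapping ranges and gain are arbitrary assumptions.

```python
# Toy sketch: map a voice frame's pitch and loudness to a light color.
# NOT Lozano-Hemmer's system -- just an illustration of the principle.
import math

def analyze_frame(samples, sample_rate):
    """Return (dominant_freq_hz, rms_loudness) for one audio frame."""
    n = len(samples)
    # RMS loudness of the frame
    rms = math.sqrt(sum(s * s for s in samples) / n)
    # Naive DFT peak search (a real system would use an FFT library)
    best_freq, best_mag = 0.0, 0.0
    for k in range(1, n // 2):
        re = sum(samples[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(samples[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_mag, best_freq = mag, k * sample_rate / n
    return best_freq, rms

def voice_to_light(freq_hz, rms, low=80.0, high=1000.0):
    """Map pitch to a hue (0-360 degrees) and loudness to intensity (0-1).
    The 80-1000 Hz range and the gain of 4 are arbitrary choices."""
    clamped = min(max(freq_hz, low), high)
    hue = 360.0 * (clamped - low) / (high - low)
    intensity = min(1.0, rms * 4.0)
    return hue, intensity

# Example: a 440 Hz tone at moderate level
sr = 8000
frame = [0.25 * math.sin(2 * math.pi * 440 * i / sr) for i in range(400)]
freq, rms = analyze_frame(frame, sr)
hue, intensity = voice_to_light(freq, rms)
```

A performance system would do this dozens of times a second per frame of audio, layering on speech recognition to pick the mode, but the core pitch-and-loudness-to-color mapping is this simple.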
Or, patterns shown on the ceiling and walls:
I was speaking to a woman from Germany at the post-show reception, and she pointed out that there was no music in the show, which made for an interesting point. I personally find that music is a heavily emotionally-loaded medium, and this may be why Mr. Lozano-Hemmer didn't use it. It may also be why the event came off as a bit clinical to me. But the clinical aspect made me, as a technology and interactivity geek, think about a few things.
First, there was a point in the demo after Ms. Rossellini's performance, where an audience member and Mr. Lozano-Hemmer both spoke the same text. Mr. Lozano-Hemmer commented that the difference was obvious to him since he had programmed it, but it wasn't obvious to me (or, apparently, many of the audience members). I mean, I could see that it was different, but I couldn't see specifically how. I wonder if this affects the audience's ability to relate to the piece? If it were a bit simpler, with a more obvious relationship between the sounds and the light, would they be more engaged?
Next, response time. Years ago, when I was working at Production Arts Lighting, we got a call from Brian De Palma's people. De Palma (whom I had encountered before on The Untouchables while working for Bran Ferren) was shooting Carlito's Way, and he wanted a scene entirely "illuminated" by the flash of a (blank) gun. They did some tests, and, if I remember correctly, the gun flash wasn't bright enough, and was too short to expose adequately on camera. They wanted to take a big 5K Fresnel and have it respond to the sound of the gunshots. We didn't have a lot of time, so we borrowed a pitch-to-MIDI converter, ran it through Bars and Pipes on an Amiga, and I wrote some filters there that would generate MIDI messages for a lighting console, which would fire a dimmer that would then light up the 5K. It was very reliable, but with all that early-90's technology, very slow. We did some gunshots, and by the time everything was captured and processed, and the 5K heated up, the light came something like a second late. It looked pretty cool but was too slow to achieve the desired effect. Things have improved dramatically since then, of course, but Mr. Lozano-Hemmer's piece suffered a bit of delay as well, which is inevitable (I've had similar discussions about this issue with Holger Forterer, who is trying to do really complex video processing within one frame time, and with Robert Lepage's engineers). But Lozano-Hemmer must have recognized this and adjusted his cueing appropriately, so while the delay was very noticeable to me, I wonder if the audience noticed?
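One way to think about a chain like the gunshot rig is as a latency budget: every stage adds its own delay, and they simply stack up. The numbers below are my own rough illustrative guesses, not measurements from that job, but they show why the tail of the chain (a big tungsten filament warming up) can dominate everything the computers do.

```python
# Illustrative latency budget for a sound-triggered light chain like the
# Carlito's Way rig. Stage timings are rough guesses, not measurements.
stages_ms = {
    "pitch-to-MIDI detection": 50,        # converter needs a stretch of signal
    "Bars and Pipes filter processing": 100,
    "MIDI transmission (31.25 kbaud)": 1, # a 3-byte message is ~1 ms
    "console cue execution": 100,
    "dimmer response": 50,
    "5K filament heat-up": 500,           # big tungsten filaments are slow
}
total_ms = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage}: {ms} ms")
print(f"total: {total_ms} ms")
```

Even if you made all the electronics instantaneous, the filament alone would keep the flash visibly behind the bang, which is part of why this kind of effect is so hard to tighten up.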
Finally: interactivity, and in that vein, time for another story. I went to see a performance years ago by Troika Ranch, which featured sound clips triggered and manipulated by data generated by the dancers via sensors on particular joints (elbows, etc.) and wirelessly transmitted to the control system. I thought it was very cool, but my date, who is not particularly technical, didn't even notice until the post-show discussion. How did the interactivity affect the piece?
These are areas I'm very interested in researching--what are the impacts on the audience and the performers of tight synchronization of technical elements, and also true interactivity? I've been preaching the virtues of both for a long time, but would the audience even notice?
Mr. Lozano-Hemmer used all Vari*Lites (they looked like VL3000s to me, but hey, I'm a sound guy), which were provided by my friends over at Scharff Weisberg, who, as always, are on the bleeding edge of this stuff. The show runs one last time tonight (which is why this entry is a bit rushed--I wanted to get it done in case you have a chance to see it).