Choice Cuts: Paul McEnery Visits the Boffins of CNMAT.
Quiet and scarcely ostentatious, the Center for New Music and Audio Technologies (CNMAT) in the Berkeley hills is changing the face of sound. CNMAT connects the high ground of IRCAM and UC Berkeley to the funding worlds of the NEA, the California State Department of Commerce, and Gibson guitars. This cross-pollination has produced some far-reaching innovations.
At the heart of this work is director David Wessel. His low tolerance for the current standards of digital expression has provoked him into action.
"I have a real allergy to the word 'sample.' We don't do sampling. We analyze real live sounds, and then if we want to duplicate them in their original form, we resynthesize. But we want a plastic intermediate representation of a 'form' that we can manipulate and be expressive with. You know, samples--you turn 'em on, you turn 'em off. You make 'em louder, you make 'em softer, and that's about it. You try to time-stretch them, and what do you do? You loop 'em in the middle, and it sounds looped. The younger generation tends to think electronic music is samples. It's become the only way you could conceive of doing it. But indeed not. It would be like, imagine trying to make a convincing orator by just taking the words that were spoken and putting them in the sampler and playing them back. That may be how they create politicians these days. But they're pretty unconvincing politicians.
"Music culture's gotten hung up on this idea of this giant deep freezer out of which you pull this particular drum sound, this particular saxophone quality or this sound. I'd rather have some abstract notion of what makes these sounds interesting. Instead of actually being a re-animator of used body parts, you'd rather understand the underlying genetic engineering and breed some new bodies."
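Wessel's distinction between frozen samples and a plastic intermediate representation is, in essence, additive analysis/resynthesis: describe a sound as a set of partials with frequency and amplitude envelopes, then regenerate it from those envelopes. A minimal sketch of the idea (the envelopes below are hypothetical toy data standing in for real analysis output, not anything produced by CNMAT's actual tools):

```python
import math

def resynthesize(partials, duration, sample_rate=44100, stretch=1.0):
    """Additive resynthesis: sum sinusoids whose frequency and amplitude
    envelopes come from analysis.  Time-stretching by `stretch` slows the
    envelopes themselves rather than looping raw samples, so the timbre
    keeps evolving smoothly.  `partials` is a list of
    (freq_envelope, amp_envelope) pairs, each a function of normalized
    time t in [0, 1]."""
    n = int(duration * stretch * sample_rate)
    out = [0.0] * n
    for freq_env, amp_env in partials:
        phase = 0.0
        for i in range(n):
            t = i / (n - 1)  # normalized position within the (stretched) sound
            phase += 2 * math.pi * freq_env(t) / sample_rate
            out[i] += amp_env(t) * math.sin(phase)
    return out

# Toy "analysis" of a decaying two-partial tone (hypothetical data):
partials = [
    (lambda t: 440.0,           lambda t: 0.6 * (1 - t)),        # fundamental
    (lambda t: 880.0 + 5 * t,   lambda t: 0.3 * (1 - t) ** 2),   # drifting 2nd partial
]

original  = resynthesize(partials, duration=0.01)
stretched = resynthesize(partials, duration=0.01, stretch=2.0)
```

Because the stretch only slows the control envelopes, the partials keep evolving continuously for the whole doubled duration--there is no loop point, which is exactly the artifact Wessel objects to in sample playback.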
"I don't know what the copyright people are going to have to say. What if I turn my neural networks on a whole body of sound and have them learn something about inner structures? Are those weights in that neural network subject to copyright law? I don't know the answer to that one. I'm sure some lawyers would have a good time with it, though." [laughs]
Which leads us neatly to the second half of Wessel's motivation--to breathe humanity into aural engineering.
"I'm an affiliate of the psychology department and this whole area of music perception and cognitive theory is a big part of what we do around here. You know, music is a really big part of the life of the mind. I mean, think about our nature; perception is there to help us interact with the world. In fact our culture seems to have pushed us more and more to records that are canned, frozen materials. I'm interested in musical forms that would allow us to perceive and act together."
"If we had ZIPI running over a dedicated network, then we could imagine doing that at least at the gestural level. I'm just thinking about the way in which some of the existing networks behave. Our experience was that we had some real latency problems. Our experience, too, is that when we use Ethernet drivers straight out of the box on the Macintosh, we get into more latency problems. In other words, a lot of work in computing hasn't addressed what I would call the other half of the realtime problem. I mean, you'll get, quote, realtime display of continuous media once you've got the media started up. But a lot of these network protocols don't satisfy this reactivity problem."
"And then the designers of drivers! Like, it was very disheartening to find out that the Apple sound drivers had a minimum latency in the 41-microsecond range and that one couldn't get around it. It's just a question of whether the engineers thought that would be adequate for, quote, realtime. Well, not for live-performance realtime. I mean, this whole idea of working in a studio on a frozen piece of music...you don't really manipulate that on the fly like it's musical material."
"I'm very much concerned with problems of improvisation, and with having music technology and representations of musical material that afford an improvisatory approach. In other words, I don't want to keep licks around in my machine. I like to have some more abstract representation that will allow me to adapt this strategy to a particular context, and do it very quickly. You know, the way that a sax player can join in under a great variety of circumstances. He'll bring his tradition with him, but he's able to slip and slide around in a lot of musical contexts and put a new twist on it. I want our software to be able to do that, to have the musical vehicle I drive around be an all-terrain vehicle."
*FAR (Fourier Analysis Synthesizer). New trademarked additive-synthesis software, particularly useful for morphing sounds and making hybrids of two sound sources.
*ZIPI. A new network protocol for electronic musical instruments that overcomes MIDI's limitations. Allows bandwidth up to 20 Mbps (MIDI's is 31.25 kbps) with currently available hardware and gives musicians much more control over musical parameters.
*ZETA GUITAR INFINITY. MIDI guitar updated for ZIPI, with its fuzzbox 2000 effects processor. It treats each string independently, picks up pluck-point timbre, fretboard data, etc., and feeds the result into a signal-processing chain, enabling pitch-synchronous events like harmonies, sound synthesis, and quarter-tone scales.
*CONDUCTING. Work following on from Don Buchla's "Lightning": neural networks learn by analyzing the conductor's gestures, putting shaping, phrasing, and articulation under total control. Used with a "virtual player," a synthesizer with score and sound options pre-written.
*TELEPRESENT LEARNING. A project using an experimental dedicated system for remote mixing and control via ISDN and T-1 lines, developed from CNMAT's MacMix software.
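The bandwidth gap in the ZIPI entry above can be put in rough, concrete terms. The sketch below assumes a three-byte note event and MIDI-style 10-bits-per-byte serial framing for both links--a simplification, since ZIPI's actual framing differs--so the results are order-of-magnitude figures, not protocol specifics:

```python
# Back-of-the-envelope message-rate comparison (simplified framing assumption:
# 10 bits on the wire per byte, as on MIDI's asynchronous serial link).
MIDI_BPS = 31_250          # MIDI's serial rate in bits per second
ZIPI_BPS = 20_000_000      # ZIPI's upper figure quoted above

BITS_PER_MESSAGE = 3 * 10  # status byte + 2 data bytes, 10 bits each on the wire

midi_msgs_per_sec = MIDI_BPS // BITS_PER_MESSAGE   # roughly 1,000 events/s
zipi_msgs_per_sec = ZIPI_BPS // BITS_PER_MESSAGE   # hundreds of thousands

print(midi_msgs_per_sec, zipi_msgs_per_sec)
```

At roughly a thousand three-byte events per second, a handful of continuous controllers already saturates MIDI, which is why Wessel's gestural-level networking needs something with ZIPI's headroom.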