John Chowning On Composition
by Curtis Roads
John Chowning (b. 1934, Salem, New Jersey) is the founder and director of the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University in California. This interview took place the afternoon of 29 April 1982 at CCRMA.
Roads: Could you tell us about your early musical experiences and education?
Chowning: My background is thoroughly traditional. I started playing violin as a child, and later I played percussion instruments in my teens. I became interested in jazz in high school. I then went to the Navy School of Music for three years, during the Korean War. I had a lot of exposure to some awfully good musicians -- people like Nat and Cannonball Adderley. There was a lot of good jazz activity.
Then after the Navy I went to college. That's when I became interested in composition. I improvised a lot as a percussionist and became more and more interested in composing. Following college I studied with Nadia Boulanger in Paris for three years, from 1959 to 1962.
Roads: Was there anything about the European musical scene at that time that especially interested you as a composer?
Chowning: The electronic music. That was a very active time in Paris. Pierre Boulez had the Domaine Musicale concert series going. I heard all the current performances of important composers being done there, like Stockhausen's Kontakte, Berio's Circles, and new pieces by Haubenstock-Ramati and Henri Pousseur, for example. So it was really lively -- quite in contradiction to the Boulanger environment. In fact, that wore me out, I must say. After about a year and a half I was ready to stop. The third year I just wrote music and participated in the concerts.
Then I came to Stanford, where I was to do my graduate work. Largely as a result of my exposure to electronic music in Paris, I inquired about the possibility of electronic music here. There was no studio -- and certainly no interest. However, they did have a rather good computer for the time, an IBM 7090. This was a great big machine in those days. It shared a disk with a DEC PDP-1. It was the beginning of the Artificial Intelligence Project here with John McCarthy, who had come from MIT in 1962.
So, with the help of David Poole and by the courtesy of McCarthy, we got Max Mathews' program Music IV going on the 7090. The sample data was written onto the shared disk, and we used the PDP-1 as a kind of buffer to the x-y digital-to-analog converters on the DECscope [a display terminal] for sound output. The first sound we made was in September of 1964.
Roads: How did your musical background affect your later compositional thinking?
Chowning: The rigorous education one gets in music, such as harmony and counterpoint, is still an important part of the way I think -- especially counterpoint. I agree with Luciano Berio in that I believe the study of counterpoint pays off. There's probably no other way to gain an insight into the workings of musical lines than going through species counterpoint. That's very much a part of me despite the fact that computers figure most prominently in my musical world today.
Improvisation also affects me deeply. The freedom one has in improvisation seems opposite to the rigor of counterpoint.
SOUND IN SPACE
Roads: When did you begin your research into the computer-controlled movement of sounds in space?
Chowning: That was my first project in 1964 when I started. It came from thoughts that were common in contemporary music at the time. There was plenty of electronic music in Europe at the time which attempted to utilize space in a fairly primitive way. Nevertheless, the idea was there.
Some of the computer research I did was obvious and some was not. The obvious work involved using multiple channels of sound to build up an image of a source at some arbitrary angle with respect to the listener. The question of distance, and the relationship of distance to reverberation, was not well understood at that time. I think that research was more interesting, and we are only beginning to realize the consequences of it. I can talk a little bit about that in a moment. The use of Doppler shift was a natural consequence of moving a sound at an angle over some distance.
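The distance cue Chowning mentions can be sketched in a few lines. This is an illustration of the general idea, not his published implementation: the direct signal falls off faster with distance than the reverberant energy does, so distant sources sound proportionally more reverberant. The function name and the 1/d vs. 1/sqrt(d) scaling are assumptions chosen for the sketch.

```python
import math

def distance_cues(distance):
    """Direct and reverberant level scaling for a source at `distance`
    (in units of the loudspeaker radius, so distance >= 1.0).

    A sketch of the kind of cue described here: direct level falls as
    1/d, reverberant level more slowly (here 1/sqrt(d)), so the
    direct-to-reverberant ratio itself signals distance."""
    direct = 1.0 / distance
    reverb = 1.0 / math.sqrt(distance)
    return direct, reverb

# A source four times as far: direct level drops to a quarter,
# reverberant level only to a half.
near = distance_cues(1.0)  # (1.0, 1.0)
far = distance_cues(4.0)   # (0.25, 0.5)
```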
Roads: Could you explain Doppler shift for the benefit of our readers?
Chowning: Doppler shift is the change in frequency that occurs when a sound source is moving toward or away from the listener. If I have a buzzer on a string and I'm twirling it over my head, I don't hear any Doppler shift. This is because there is no change in relation to my position; there is a constant radius. But you, the listener, standing near the perimeter of the buzzer's trajectory, will hear a pronounced Doppler shift. The sound will increase in frequency as it comes toward you and decrease as it goes away. In any case, it's a cue to the motion of sound in space -- in particular, to the radial velocity of a sound, as opposed to angular velocity. So what I did was write a program that incorporated a distance cue, an angular cue, and a velocity, in such a way that a composer could use it gesturally. A composer could specify geometrical sound paths in a two- or three-dimensional space (Chowning 1971).
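The twirled-buzzer example can be made concrete with the standard Doppler formula for a moving source and stationary listener. This is a minimal sketch; the function and constant names are illustrative, not taken from Chowning's program.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doppler_shift(f_source, radial_velocity):
    """Perceived frequency for a moving source and a stationary
    listener. radial_velocity > 0 means the source approaches."""
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_velocity)

# Buzzer on a 1 m string twirled at 2 revolutions per second:
# tangential speed ~12.6 m/s.  For the twirler the radial velocity is
# zero (constant radius), so no shift; a listener at the perimeter
# hears the pitch rise on approach and fall on recession.
v = 2 * math.pi * 1.0 * 2.0
approaching = doppler_shift(440.0, v)   # raised pitch
receding = doppler_shift(440.0, -v)     # lowered pitch
twirler = doppler_shift(440.0, 0.0)     # unchanged: 440.0 Hz
```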
Roads: In which compositions did you use these spatial programs?
Chowning: In Turenas I made extensive use of these programs; I also used them in my first computer piece, Sabelithe (1971). Turenas, which is a four-channel composition, was probably the most effective use.
TURENAS AND FM
Roads: When was Turenas composed?
Chowning: It was completed in the spring of 1972. The compositional work spanned several years, however. I was involved with writing the spatial manipulation programs for some time, and Turenas made extensive use of that experimentation. It's hard to say when a composition begins if research is tied so intrinsically to a work. The piece evolved over a period of years, and I finally finished it after I concluded that I had achieved enough musical-gestural control over the computer.
Roads: Turenas is based on the frequency modulation (FM) sound synthesis technique, a technique based on your own research. How is FM used in Turenas?
Chowning: FM is something I stumbled upon in the mid-1960s. It turned out that one could, in a sense, "cheat on nature." By modulating the frequency of one oscillator (the carrier) by means of another oscillator (the modulator), one can generate a spectrum that has considerably more components than would be provided by either of the two alone (Chowning 1973).
There's another important aspect. FM provides a simple way to get dynamic control of the spectrum, which is one of the aspects of natural sounds that was very difficult to reproduce with analog synthesizers. So FM is a synthesis technique that is useful or not depending upon the type of control one desires. It turns out to be quite widely used, and its usefulness is that it provides a few handles onto a large timbral space.
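The two points above, a carrier phase-modulated by a single sinusoid, and dynamic spectral control through the modulation index, can be sketched as follows. The function and parameter names are illustrative assumptions; the underlying formula is the simple FM described in Chowning (1973).

```python
import math

def fm_sample(t, fc, fm, index):
    """One sample of simple FM: a sine carrier at fc whose phase is
    modulated by a sine at fm.  `index` is the modulation index:
    0 gives a pure sine; larger values push energy into sidebands
    at fc +/- k*fm, enriching the spectrum."""
    return math.sin(2 * math.pi * fc * t
                    + index * math.sin(2 * math.pi * fm * t))

# Dynamic control of the spectrum: ramping the index over a note's
# duration brightens the tone continuously -- the behavior that was
# hard to reproduce on analog synthesizers.
SR = 8000  # illustrative sample rate
note = [fm_sample(n / SR, fc=220.0, fm=220.0, index=5.0 * n / SR)
        for n in range(SR)]  # one second, index sweeping 0 -> 5
```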
In Turenas, I used only the FM technique for generating the tones. I used it in both a harmonic series mode and a noisy inharmonic series mode, with transformations between the two. One of the compositional uses of FM was in timbral transformation. This was often coupled with spatial manipulation. As the sounds crossed the space they underwent a timbral transformation.
Roads: How was this accomplished?
Chowning: There were a number of techniques. Sometimes there were very slow transformations from harmonic series timbres to other harmonic series timbres -- from rich double-reedy sounds to flutelike sounds. In that case, there was a gradual change in modulation index. Other kinds of transformations in the piece had to do with changes from harmonic to inharmonic spectra or the inverse, through a gradual change in the carrier-to-modulator (c:m) ratio.
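The harmonic-to-inharmonic transformations described above hinge on where the FM sidebands fall. A small sketch (names and frequencies are illustrative, not from the piece): sidebands lie at fc ± k·fm, with negative frequencies folding back, so a simple integer c:m ratio yields a harmonic series while an irrational ratio does not.

```python
import math

def fm_sidebands(fc, fm, order=4):
    """Frequencies of the carrier and first `order` sideband pairs:
    fc +/- k*fm, with negative frequencies reflected as |fc - k*fm|."""
    freqs = {fc}
    for k in range(1, order + 1):
        freqs.add(fc + k * fm)
        freqs.add(abs(fc - k * fm))
    return sorted(freqs)

# c:m = 1:1 -> every component is a multiple of 100 Hz (harmonic)
harmonic = fm_sidebands(100.0, 100.0)
# c:m = 1:sqrt(2) -> components share no common fundamental (inharmonic)
inharmonic = fm_sidebands(100.0, 100.0 * math.sqrt(2))
```

Gradually changing the c:m ratio from the first case toward the second slides the spectrum from harmonic to inharmonic, which is the kind of transformation Chowning describes.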
Roads: Would you say there's a kind of dualism in your music based on competing tendencies towards rigor and improvisation?
Chowning: Yes. Stria (1978) was rigorously composed. Turenas was much more improvisatory. They both feel natural to me. Stria was probably the most fun piece I have ever composed.
Roads: That was rigorous composition.
Chowning: Right. I just got into it. It was the first time I'd tried to use a high-level programming language to realize a composition in toto. I learned a lot and I enjoyed the rigor of it all. Then at some point it became magical when it was all working!
Roads: How was Stria organized?
Chowning: It was based on an idea that occurred in the early 1970s. Just after I'd finished Turenas I was doing some experiments with FM synthesis using inharmonic spectra. I marveled at the fact that in setting inharmonic ratios between carriers and modulators, that unlike in nature, there was a perceptible order when one moved through the frequency space with a constant spectrum. Even when I changed the envelopes, there seemed to be something remaining that was certainly distinct from the harmonic series but was still ordered.
Then when I was in Berlin in 1974 and had no computer to use, but had lots of time, I thought about all this. I was looking for an inharmonic ratio such that the components would be powers of some basic ratio. It turns out that the Golden Mean (1.618) is such a number. If one has a c:m ratio that is 1 to some power of the Golden Mean, then several of the low-order spectral side components are also powers of the Golden Mean.
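The property Chowning found can be checked directly. With a carrier at 1 and a modulator at the Golden Mean φ, the FM sidebands fall at |1 ± kφ|, and the identities φ² = 1 + φ, φ³ = 1 + 2φ, and φ⁻¹ = φ − 1 put several low-order components exactly on integer powers of φ. A quick numerical verification (a sketch, not code from Stria):

```python
# The Golden Mean and the low-order FM sidebands of a 1 : PHI
# carrier-to-modulator ratio.
PHI = (1 + 5 ** 0.5) / 2  # ~1.618

upper1 = 1 + PHI           # first upper sideband
upper2 = 1 + 2 * PHI       # second upper sideband
lower1 = abs(1 - PHI)      # first lower sideband (reflected)

# Each lands on an integer power of PHI:
assert abs(upper1 - PHI ** 2) < 1e-12
assert abs(upper2 - PHI ** 3) < 1e-12
assert abs(lower1 - PHI ** -1) < 1e-12
```

Not every sideband has this property (the second lower sideband, 2φ − 1 = √5, does not), which matches Chowning's careful phrasing that "several of the low-order" components are powers of the Golden Mean.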
What I did was draw an analogy between this inharmonic spectrum -- including a frequency space where the pseudo-octave is at powers of the Golden Mean -- and the harmonic series and tonality, where the low-order components of the harmonic series are also the principal intervals of the tonal system -- the octave, the fifth, and so on. I drew this loose analogy and wrote some programs to help me compose, in particular to help me with the sound synthesis. It was not automatic composition by any means, but there were rules for determining the details of the structure, from the microsound level up to the level of a phrase.
In Stria, all frequency components are based on powers of the Golden Mean in the c:m ratios. Then I divided up the frequency space so there was some degree of complementarity. So it is all very cohesive perceptually, even though it's inharmonic and sounds a little strange. But it doesn't take long, even for a naive listener, to realize that even though it's strange it's cohesive at a deep level. I believe this is because of the unified structure of spectral formation.
SYNTHESIS OF THE SINGING VOICE AND PHONE
Roads: When did you go to IRCAM, the French musical research institute?
Chowning: I was associated with some of the plans at a developmental stage in the mid-1970s. I made some of my thoughts known about interesting directions. Others from CCRMA, including Andy Moorer, John Grey, and Loren Rush, were also involved. Then I went there for about eleven months in 1979 and 1980. I developed some algorithms based on FM for synthesis of sung vocal tones (Chowning 1980).
Roads: You used these tones in your composition Phone.
Chowning: That's right. Phone is based exclusively on the use of this algorithm. The idea was inspired by some work of Michael McNabb's here on the additive synthesis of sung vocal tones. I hadn't intended to work on that when I went to IRCAM, but I took it on in order to familiarize myself with their system. It turned out that Johan Sundberg was there at the time, a wonderful scientist from Sweden. He has done considerable work in the analysis of the singing voice. So I had this tremendous resource at my elbow, and I was seduced by the problem.
I became extraordinarily interested in naturalness. I found that all the previous attempts at vocal synthesis really lacked something. So I developed this algorithm and tried to embed in it as many performance characteristics as I could. This meant understanding them. For example, how much randomness in periodic vibrato must be present in order to create a convincing impression? Or, must a sung vocal tone have a little portamento in the attack? Or, how do the formants behave during the attack and decay portions of a sung vowel? It turns out that all these things are very important. My stay at IRCAM could be characterized as "tending to detail."
Having done all this, I found that interesting ambiguities occurred if there was neither periodic nor random microfrequency variation. One can make sounds that sound like an instrument and then evolve into vocal-like tones.
Roads: Where does Phone stand on the scale of rigorous organization versus improvisation?
Chowning: Right in the middle. I also used computer programs to control the low-level synthesis as in Stria, but I think there was more fantasy in its composition.
THE SOURCE OF COMPOSITIONAL IDEAS
Roads: Where do your compositional ideas come from? Do they come from imagining large-scale structures or processes, or do they come from within the sounds themselves?
Chowning: They come from several sources. Certainly all the time that I and others have spent over the years looking at the internal workings of sound at the microstructural level has influenced the way we proceed. This is something that in traditional composition one doesn't normally do. There is no doubt that Stria evolved from a microstructural notion. The piece as a whole reflects the shape of the event in its smallest unit.
But I must say I get a great deal of inspiration from computer programming languages. The idea of a procedural language reeks of music somehow. I've just barely touched that domain. It's clear to me from watching others work in this lab, using programs like Bill Schottstaedt's Pla program, that computer languages are extraordinary resources.
Most of the music being written here at CCRMA involves powerful algorithmic processes. It is very different from the note-by-note Music V kind of input to the computer. These algorithmic approaches are obviously rich because they are being used so widely and the music is so good.
The language is important. It is a lot easier to do things in a modern high-level language than it was with FORTRAN or assembler, for example. More and more, the musical idea evolves from a kind of cyclical interaction with the language. One asks something of the language and it yields more than you asked for. That's not surprising since the language represents thousands of years of thought about thought.
Then, of course, fantasy is another component of my compositional thinking. I can't talk very much about that because I do not know how to talk about it.
There is another aspect to this, one which we play upon to good effect here at Stanford. One can present music in a concert situation in a manner it cannot be presented at home, using very fine audio equipment in a carefully planned context. We do outdoor concerts. Our audience is growing to the point where we now attract several hundred people to a concert. Well now, that's rather extraordinary for nothing but "tape music." I think the proof of the matter is that it can't be dead if it's alive and well at Stanford and a number of other places.
Sure, we would all like to have more performance involved. I don't think any of us who are working in the medium feel that performance is to be excluded -- quite the contrary. For years and years we have wished that digital systems were cheaper and smaller such that we could introduce the performer into the complex. We hope it will happen -- and soon -- but it's not exclusive, it's additive -- another use of the computer.
Chowning, J. 1971. The simulation of moving sound sources. Journal of the Audio Engineering Society 19(1):2-6.
Chowning, J. 1973. The synthesis of complex tones by means of frequency modulation. Journal of the Audio Engineering Society 21(7):526-34. Reprinted in C. Roads and J. Strawn, eds. 1985. Foundations of computer music. Cambridge: MIT Press.
Chowning, J. 1980. Synthesis of the singing voice by frequency modulation. In E. Jansson and J. Sundberg, eds. 1980. Sound generation in winds, strings, and computers, 4-13. Pub. No. 29. Stockholm: Royal Swedish Academy of Music.
Schrader, B. 1982. Introduction to electro-acoustic music. Englewood Cliffs, N.J.: Prentice-Hall.