[ 2nd of 3 parts]
- pg 44 -
The Synthesizer: An Interactive Electronic Wave Instrument
Here the term synthesizer is used in its generic sense. It refers to instruments that combine the constituent elements of separate entities into a single unified entity, a complex whole formed by the combination and integration of diverse functions. In this sense, the term synthesizer includes analog and digital electronic instruments in the arts.
The key to unlocking the synthesizer's power resides in understanding the extensive ramifications of its interactive nature. Many of the internal functions or activities of the most advanced instruments are designed to act upon one another reciprocally. Each function can be programmed to be affected or controlled by the activity put out by itself, another function, or a collection of functions. In turn the controlling function can be affected by the original function (by way of a feedback loop) or a totally different function or set of functions. In brief, the output of a function can be used to control itself and/or the operation of another function or set of functions.
Fundamental to the notion of interaction is the principle of feedback (see Figure 3-1). Feedback involves the return of part of a system's output to the input for the purposes of control or adjustment. Figure 3-1A illustrates the simplest type of feedback: part of the output is returned as input. Figure 3-1B illustrates a more complex type: a second function is added to the loop. Figure 3-1C illustrates an even more complex type: feedback loops are contained within, or nested in, a feedback loop. The number of possible feedback variations is very large; the exact number depends upon the number of variable elements in the system.
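The simplest case (Figure 3-1A) can be sketched in a few lines of code: a fraction of each output sample is returned to the input on the next step. The function name and the feedback gain `g` are illustrative assumptions, not anything specified in the text.

```python
def run_feedback(inputs, g=0.5):
    """Simplest feedback loop: a fraction g of each output
    is returned to the input on the following step."""
    out = []
    fb = 0.0
    for x in inputs:
        y = x + g * fb   # input combined with the fed-back output
        out.append(y)
        fb = y           # this output becomes the next feedback term
    return out

print(run_feedback([1.0, 0.0, 0.0, 0.0], g=0.5))
# → [1.0, 0.5, 0.25, 0.125]
```

A single impulse decays geometrically around the loop; raising `g` toward 1.0 sustains the circulation longer, which hints at why even this simplest loop is a usable control resource.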
In general, the synthesizer is composed of a collection of electronic functions for generating, processing, transforming, modifying, converting, and controlling electronic waves that eventually become sound and light forms. The synthesizer includes functions for translating or converting acoustic or mechanical waves to electronic waves to make the information in the acoustic waves compatible with the electronic system. The translation functions provide communication channels for external systems. For example, the waves produced by a clarinet or any acoustic source can be translated by a microphone and preamplifier to electronic waves so they can be modified by the synthesizer, or they can be used to modify the synthesizer's activities.
- pg 50, 51 -
Turning a dial to adjust an oscillator's frequency requires the use of a hand. The breakthrough in the electronic arts occurred when the concept of voltage control was developed to simulate tedious and cumbersome manual activities. From that modest conceptual beginning the application of the principles of voltage control quickly evolved beyond conceptions based on the hand to those emerging from purely electronic insights. New tools are often initially used for the impoverished simulation of an already existing medium rather than for the creation of forms based on the uniquely expressive capacity of the new tools. It is the principle of voltage control that establishes the synthesizer as a truly new instrument with powerful expressive and educative capabilities.
...In general the operating characteristic of a voltage-controlled circuit is subject to the direct influence of an applied voltage and will closely correspond to the fluctuations in that voltage.
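That operating characteristic can be sketched as a digital stand-in for a voltage-controlled oscillator whose instantaneous frequency tracks an applied control signal. The 1-volt-per-octave law is a common analog-synthesizer convention, and `base_freq` and `sr` are illustrative assumptions, not values from the text.

```python
import math

def vco_samples(control_voltages, base_freq=110.0, sr=1000):
    """Voltage-controlled oscillator sketch: the instantaneous
    frequency follows the control voltage at 1 volt per octave
    (a common analog convention; base_freq and sr are illustrative)."""
    phase = 0.0
    out = []
    for cv in control_voltages:
        freq = base_freq * (2.0 ** cv)      # frequency tracks the voltage
        phase += 2 * math.pi * freq / sr    # advance phase accordingly
        out.append(math.sin(phase))
    return out
```

Feeding the control input with the output of another function (an envelope, another oscillator) rather than a hand on a dial is exactly the move the voltage-control principle makes possible.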
- pg 101 thru 103 -
During the 1960s Leland Smith and John Chowning of Stanford University set up a computer music program modeled on the research at Bell Labs. During the early 1970s Chowning developed a new application of frequency modulation that controlled the dynamic characteristics of spectral components to simulate natural sounds and to create new sounds. As the result of research in the synthesis of natural sound, it was discovered that timbre is determined in large part by the relative dynamic characteristics of partials. The amplitudes of partials normally evolve in complex ways, particularly during the onset and decay periods of the sound. Chowning's FM technique was based on those principles.
Chowning's elegant technique applied frequency modulation (FM) equations to the production of FM sounds in which the carrier and the modulating frequencies are in the audio range and the sideband frequencies make up the spectrum. In general, the frequency of the FM carrier wave changes in synchronism with the amplitude of the modulating wave. The amplitude of the modulating wave determines the carrier wave's deviation from its average frequency, a parameter called peak deviation. By carefully balancing the carrier and modulating waves' frequencies, amplitudes, and modulation index (the ratio of the peak deviation to the modulating frequency), Chowning's FM technique proved to be a viable perceptual model for natural sounds and a rich source of new sounds.
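The relationship described above can be sketched in the standard simple-FM form y(t) = sin(2πf_c·t + I·sin(2πf_m·t)), where I is the modulation index; sidebands appear at f_c ± k·f_m. The function name and parameter values below are illustrative assumptions, not Chowning's implementation.

```python
import math

def fm_tone(fc, fm, index, dur=0.01, sr=8000):
    """Simple FM synthesis sketch:
       y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t))
    where index is the modulation index (peak deviation divided
    by the modulating frequency). Parameter values are illustrative."""
    n = int(dur * sr)
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fm * t / sr))
            for t in range(n)]
```

With `index` set to 0 the expression collapses to a plain sine wave at the carrier frequency; raising the index widens the spectrum by strengthening higher-order sidebands, and varying it over time is what lets the technique shape a tone's evolving spectrum.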
During the 1970s, John Chowning's work on computer music at Stanford's Artificial Intelligence Laboratory attracted the attention and eventual collaboration of an outstanding interdisciplinary group of young researchers. The group, including composers, theorists, mathematicians, psychoacousticians, electrical engineers, and systems designers, established itself as the leading center of research in computer music and acoustics. In 1977, members of the group set up a quarterly, Computer Music Journal (CMJ), devoted to musical applications of digital electronics. The CMJ rapidly became the principal international communication channel for computer music practitioners. [actually CMJ was established independently of Stanford jh]
The CMJ took advantage of its home base in the San Francisco Bay Area, the heart and brain of the space-age electronics industry in the 1970s, to discover and divulge the latest rumors about hardware and software advances in the computer field that related in any way to music and acoustics. Microprocessors, converters, powerful integrated circuits, interfaces, special applications, new digital instruments, digital recording and playback systems, and memory advances were presented and discussed in the CMJ as seeds of brainstorms many months before the items were released commercially.
The CMJ included articles on applications and improvements of established computer music techniques such as additive, subtractive, and FM synthesis. It contained descriptions and evaluations of computer music publications and emerging digital products related to the music field. Some of the more technical articles described special techniques for maximizing the computer's power in analyzing and generating sound events. Many of the articles were written by researchers reporting the results of their latest experimental work.
Although the computer facilities at Stanford are far from modest and many of the CMJ's articles related mainly to large installations, some attention was given to more economical approaches to computer music such as microcomputers and hybrid systems.
A special feature of the CMJ was a lexicon of computer-analyzed tones from traditional instruments. The lexicon was presented in successive issues with each part focusing on particular instruments. The articles relied heavily on computer-generated graphs that indicated the amplitude of each of a tone's harmonics as a function of time. The graphs included spectrographic plots and perspective plots illustrating at a glance the dynamic relationships of a tone's harmonics; they also included amplitude and frequency plots of individual partials.
- pg 189 -
... Overshoot is a measure of how far the system exceeds the command signal position (see Figure 7-8). Overshoot can be corrected by damping but this tends to slow down the response of the system. Damping techniques serve to prevent or hinder the vibratory motion of a system capable of free oscillations.
Since both overshoot and slow response are distortions of the normally desired response to a command signal, equipment designers usually settle for a compromise by balancing the distortions into an acceptable range. With good effect, artists can consider the distortions as compositional variables in search of an appropriate context.
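The overshoot-versus-speed trade-off can be illustrated with a minimal discrete simulation of a second-order system answering a unit step command. The governing equation, the damping ratio `zeta`, and all parameter values below are standard control-theory conventions assumed for the sketch, not drawn from the text.

```python
def step_response_peak(zeta, wn=10.0, dt=0.001, steps=2000):
    """Simulate x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u for a unit
    step command u = 1 and return the peak position reached.
    zeta is the damping ratio; wn, dt, steps are illustrative."""
    x, v = 0.0, 0.0
    peak = 0.0
    for _ in range(steps):
        a = wn * wn * (1.0 - x) - 2.0 * zeta * wn * v  # accel. toward command
        v += a * dt
        x += v * dt
        peak = max(peak, x)
    return peak  # a peak greater than 1.0 means the system overshot

# Light damping responds quickly but overshoots the command position;
# heavier damping removes the overshoot at the cost of a slower response.
print(step_response_peak(0.3))  # underdamped: peak well above 1.0
print(step_response_peak(1.0))  # critically damped: little or no overshoot
```

Sweeping `zeta` between these extremes is the designer's compromise the passage describes; treating `zeta` itself as a performable variable is the artist's reinterpretation of it.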
At the center of the sphere of the electronic arts is the notion of the synthesizer as a wave instrument, an instrument that generates, controls, and transforms electrical waves in modes analogous to much that we know intuitively, psychologically, and scientifically of the world of living phenomena. Harmonic forces, that is, periodic oscillations that have integral-multiple relationships, give shape to our experience by providing easily recognizable references. Integral-multiple relationships are whole-number relationships like those found in the harmonic series, that highly ordered collection of simultaneous vibrations that is characteristic of a vibrating string. ...
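The integral-multiple relationships of the harmonic series reduce to a trivial computation: each partial is a whole-number multiple of the fundamental. The function name and the example fundamental are illustrative assumptions.

```python
def harmonic_series(fundamental, n=8):
    """Integral-multiple (whole-number) partials characteristic of
    a vibrating string: k * fundamental for k = 1, 2, ..., n."""
    return [k * fundamental for k in range(1, n + 1)]

print(harmonic_series(110.0, 6))
# → [110.0, 220.0, 330.0, 440.0, 550.0, 660.0]
```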
- pg 192 -
The electronic arts of sound and light grow out of the same general approach to systems design. The basic system is a network of temporal events with a recognizable structure, a complex of interactive elements subject to individual changes and changing sets of relationships, and an organization consisting of hierarchically and laterally related subsystems. The electronic arts are based on multidimensional systems undergoing dynamic and symbolically significant transformations. The patch or instrument design can be viewed as a collection of interrelated time functions operating on wave variables. From one perspective each function has its own existence and its own time. From another perspective the entire system constitutes an integrated whole with no separable elements. The time of the whole is of a higher order than the individual times of the elements.
There is no reason to expect instrument designs that produce desirable events in one sensory sphere to produce the same in another. Each receptor system and its associated memory evolved to respond to different ranges and temporal configurations of the vibrational spectrum.