Music N is a name, coined retrospectively, that commonly indicates a family of computer music languages developed over the course of some forty years. Although created by different people and in different contexts, these languages share certain characteristics that have led, in effect, to their being spoken of as members of a single family.
Background history – The history of Music N begins with the research carried out by Max Mathews at Bell Laboratories starting in the fifties. In 1957 he created Music I, the founder of the family from which, in different ways, the later languages derive. Mathews's work was of great importance for future developments in computer music, not only for the development of the Music N languages themselves but also because these languages fueled research in other centers in the United States and Europe. In "Music N", the "N" is used in the sense of mathematical terminology: the nth. Indeed, though developed in different centers and by different people, the languages of this family carried forward the initial research inaugurated by Max Mathews. The new languages were identified simply by numbers, often progressive, each distinguishing a language from its predecessor. In this regard it should be noted that there are cases where one can speak of new versions of an older language: such is the case of Music II or Music IVBF, which are updated versions of, respectively, Music I and Music IVB. In other cases, however, it is more correct to speak of entirely new languages: Music 360, for example, cannot be considered a new version of Music IV, on which it is nonetheless based, but a new language that retains some characteristics of its more or less distant predecessor while also introducing new, often more substantial ones.[1]

Family tree – In order to better understand the development of the different languages, it is useful to provide graphs or tables. The first, below, lays out the chronological development of the Music N languages, also pointing out the people and places involved. It is followed by a graph illustrating the respective lineages of each of the Music N:
YEAR | VERSION | PLACE | AUTHOR |
1957 | Music I | Bell Labs (New York) | Max Mathews |
1958 | Music II | Bell Labs (New York) | Max Mathews |
1960 | Music III | Bell Labs (New York) | Max Mathews |
1963 | Music IV | Bell Labs (New York) | Max Mathews |
1963 | Music IVB | Princeton University | Hubert Howe, Godfrey Winham |
1965 | Music IVF | Argonne National Laboratory (Chicago) | Arthur Roberts |
1966 | Music IVBF | Princeton University | Hubert Howe, Godfrey Winham |
1966 | Music 6 | Stanford University | Dave Poole |
1968 | Music V | Bell Labs (New York) | Max Mathews |
1969 | Music 360 | Princeton University | Barry Vercoe |
1969 | Music 10 | Stanford University | John Chowning, James Moorer |
1970 | Music 7 | Queens College (New York) | Hubert Howe, Godfrey Winham |
1973 | Music 11 | M.I.T. | Barry Vercoe |
1977 | Mus10 | Stanford University | Leland Smith, John Tovar |
1980 | Cmusic | University of California, San Diego | Richard Moore |
1984 | Cmix | Princeton University | Paul Lansky |
1985 | Music 4C | University of Illinois | James Beauchamp, Scott Aurenz |
1986 | Csound | M.I.T. | Barry Vercoe |
The lineages within the Music N family, in turn, are structured as follows:
Features – Beyond the direct lines of descent, what unites these languages, and what ultimately allows one to speak of a single family of computer music languages, is a set of characteristics that, albeit with more or less obvious variations, we find consistently across the different versions. These elements, to name at least a few, are the alphanumeric approach, the use of unit generators (hereafter UGs), and operation in deferred time.
The alphanumeric approach – This is one of the most obvious characteristics of the Music N languages. It means that musical parameters, such as the definition of an instrument or the specific characteristics of a sound (its duration, pitch, amplitude, etc.), are specified through definitions made of letters and numbers. To use these programs it is therefore necessary to work in a text editor, creating files that will then be read and interpreted by a compiler. This is a markedly different approach from that adopted in other professional software, such as Max/MSP (but also the more commercial Reaktor), which is based on graphical interfaces that ease the approach to composition. The fact remains, however, that over the years researchers were interested (even during the early years of computer music) in developing tools and utilities that would allow a graphical approach to composition. This is the case, for example, of Graphic 1 by Max Mathews, designed for Music IV, or of Barry Vercoe's OEDIT, designed for Music 11, up to Csound, for which graphical front ends such as Cecilia were developed.
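To make the alphanumeric approach concrete, here is a minimal Python sketch of how a compiler might read one such note statement. The field layout used here (instrument, start, duration, amplitude, frequency) is an illustrative convention of this sketch, not the exact format of any particular Music N language.

```python
# Hypothetical sketch: reading a Music-N-style alphanumeric note statement.
# The statement "i1 0 2 10000 440" is just letters and numbers in a text
# file; the compiler's job is to split it into parameter fields.

def parse_note(line):
    """Split a note statement like 'i1 0 2 10000 440' into named fields."""
    fields = line.split()
    return {
        "instrument": fields[0],       # e.g. 'i1': play instrument 1
        "start": float(fields[1]),     # onset time in seconds
        "duration": float(fields[2]),  # length in seconds
        "amplitude": float(fields[3]), # linear amplitude
        "frequency": float(fields[4]), # pitch in Hz
    }

note = parse_note("i1 0 2 10000 440")
print(note["frequency"])  # 440.0
```

The point is simply that the entire musical specification passes through plain text, which is what distinguishes this family from graphical environments such as Max/MSP.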
Unit generators – The issue of UGs is rather more complex. The concept of the UG was developed and applied for the first time by Max Mathews in Music III. It can be argued that the introduction of UGs strongly influenced the subsequent development of computer music, so much so that we find them in almost all of the languages and software developed over the following decades. Here we will simply say that UGs (unit generators) are, from a computing perspective, predefined routines that perform the various functions useful in generating or controlling sound. UGs include, for example, oscillators, filters, and amplitude envelopes, but also delays, spatialization, and so on. By connecting individual UGs it is thus possible to build one's own instruments, of greater or lesser complexity depending on the personal needs of the composer. In those years this was a revolutionary, highly innovative approach, whose importance is shown by its constant use ever since.
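The idea of patching UGs together can be sketched in a few lines of Python. This is not code from any Music N system: the functions `oscil` and `fm_pair`, and the sample rate, are illustrative assumptions. The sketch shows one UG (a sine oscillator) whose output drives a parameter of another (the frequency of a second oscillator), which is exactly the kind of interconnection the text describes.

```python
import math

# Hypothetical sketch of the unit-generator idea: each UG produces a
# stream of samples, and the output of one UG can drive a parameter of
# another. Names and sample rate are illustrative, not from any Music N.

SR = 8000  # sample rate in Hz (illustrative)

def oscil(freq_hz, amp, n, sr=SR):
    """A sine-oscillator UG: n samples at a fixed frequency and amplitude."""
    return [amp * math.sin(2 * math.pi * freq_hz * i / sr) for i in range(n)]

def fm_pair(carrier_hz, mod_hz, mod_depth, amp, n, sr=SR):
    """Patch two UGs together: the modulator's output offsets the
    carrier's frequency sample by sample (simple frequency modulation)."""
    modulator = oscil(mod_hz, mod_depth, n, sr)
    phase, out = 0.0, []
    for m in modulator:
        phase += 2 * math.pi * (carrier_hz + m) / sr
        out.append(amp * math.sin(phase))
    return out

# One second of a 440 Hz carrier modulated by a 110 Hz oscillator.
samples = fm_pair(carrier_hz=440, mod_hz=110, mod_depth=50, amp=0.8, n=SR)
```

Swapping the modulator for a filter or an envelope UG would change the sound but not the structure: instruments are just graphs of interconnected UGs.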
Deferred time – Another aspect that unites the languages of this family is their use in deferred (non-real) time. This was clearly an undesired aspect, at least in the early years, and not necessarily exclusive to Music N. The problem with these languages was primarily tied to the limits imposed by hardware that was too slow (and at the same time too expensive) to enable real-time computer-based composition. One had to wait for the development of Csound to have the first Music N that could also operate in real time (Cmusic, before Csound, was used in real time, but only within the CARL System).
Programming – Having spanned a period of over thirty years, the Music N languages were programmed, from time to time, in different languages. In general we can say that they moved from assembler, varying according to the computer models used, to Fortran and, in later years, to the C language. It may be helpful at this point to provide an explanatory table summarizing the language adopted in each version of Music N:
YEAR | VERSION | PROGRAMMING LANGUAGE |
1957 | Music I | Assembler (IBM 704) |
1958 | Music II | Assembler (IBM 704) |
1960 | Music III | Assembler (IBM 7090) |
1963 | Music IV | Assembler (IBM 7094) |
1963 | Music IVB | Assembler BEFAP (IBM 7094) |
1965 | Music IVF | Fortran |
1966 | Music IVBF | Fortran |
1966 | Music 6 | Fortran |
1968 | Music V | Fortran |
1969 | Music 360 | Assembler (IBM 360), Fortran |
1969 | Music 10 | Assembler (PDP-10) |
1970 | Music 7 | Fortran |
1973 | Music 11 | Assembler (PDP-11), Fortran |
1977 | Mus10 | Assembler (DEC KL 10), Algol |
1980 | Cmusic | C, Csh, Cpp |
1984 | Cmix | Assembler (IBM 730), C, MINC |
1985 | Music 4C | C |
1986 | Csound | C |
Instrument and Score – One last feature worth underlining relates to the way these languages operate. Being based on an alphanumeric approach, the Music N languages allow the creation of particular types of instruments through the interconnection of the various UGs. This means that the output of one UG, for example an oscillator, can be used as the input of another UG (for example, to modulate the frequency of a second oscillator). The definition of a particular synthesis technique, that is, the realization of a certain system of sound generation or control, takes place in a section called the Instrument. This is joined by a section called the Score, where one defines which instruments will be played and in what manner: for how long, and starting from what moment. On this point the literature is full of criticism, formulated as early as the sixties, of the inadequacy of the terminology used. In particular, many believe it is misleading to continue using terminology that belongs to a tradition quite different from that of electronic music. Much has been done in later software to eliminate what is, for many, an error of approach.
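The Instrument/Score division described above can be sketched as follows. Again, this is a hypothetical Python illustration under assumed names (`instrument`, `score`, a made-up sample rate), not the syntax of any actual Music N language: the instrument is a synthesis routine built from UG-like code, while the score is a list of events saying when, for how long, and with which parameters it plays.

```python
import math

# Hypothetical sketch of the Instrument/Score separation (not actual
# Music N syntax). The instrument defines HOW a sound is made; the
# score defines WHEN and WITH WHAT parameters it is played.

SR = 8000  # sample rate in Hz (illustrative)

def instrument(freq_hz, amp, dur_s, sr=SR):
    """A one-oscillator instrument with a linear decay envelope."""
    n = int(dur_s * sr)
    return [amp * (1 - i / n) * math.sin(2 * math.pi * freq_hz * i / sr)
            for i in range(n)]

# Score: one event per note as (start s, duration s, amplitude, frequency Hz).
score = [(0.0, 0.5, 0.5, 440.0),
         (0.5, 0.5, 0.5, 660.0)]

# "Performance": render every score event and mix it into one output buffer.
total = int(max(start + dur for start, dur, _, _ in score) * SR)
out = [0.0] * total
for start, dur, amp, freq in score:
    offset = int(start * SR)
    for i, s in enumerate(instrument(freq, amp, dur)):
        out[offset + i] += s
```

Because the score only carries numbers, the same instrument can be reused for any number of notes, which is precisely the division of labor the Music N languages established.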
Conclusion – Although the Music N languages brought with them, in different historical periods, several critical aspects, and although today it is possible to use languages and software that are just as sophisticated but easier to use (think of Max, as well as Pure Data or SuperCollider, to name a few), there is no doubt that their importance for the development and history of computer music has been remarkable. This is evidenced by the many composers who adopted them for their compositions, by the research centers that used them (in Europe we need only mention IRCAM, Pierre Schaeffer's Groupe de Recherches Musicales, the Centro di Sonologia Computazionale in Padua, the EMS in Stockholm, several centers in England, etc.), and by the many other software systems that, in one way or another, have looked to the tradition of Music N.
For this topic I've read:
[1] Alex Di Nunzio, Genesi, sviluppo e diffusione del software Music N nella storia della composizione informatica, thesis, Università di Bologna, D.A.M.S. Musica, 2010.