Synthetic Performer


The Synthetic Performer is a computer system that acts as a virtual performer, able to provide live accompaniment during performances involving traditional instruments.

Premises – In developing this system, Barry Vercoe began by observing the processes that take place in musical performances with traditional instruments. Among the performers, a continuous exchange of information is established, in the form of an empathic relationship, fed by each musician hearing the different musical parts. This process becomes more complicated in performances that incorporate magnetic tape. In that case the empathic relationship is missing: the performer is limited to listening carefully to the running speed of the tape in order to synchronize their playing, knowing they can expect no help from the technological system. Faced with this issue, Vercoe wondered whether, and how, the empathic relationship described above could be transferred to a hybrid performance environment combining human performers and technology.

Brief history – Before the Synthetic Performer, Barry Vercoe had already begun working on real-time computer systems. In 1971 he worked at M.I.T. on the design of a digital synthesizer geared towards live performance. In 1973 this activity continued with the collaboration of some engineers from Lincoln Laboratory. The work was never completed, but the experience provided the basic principles for his subsequent research. In the early eighties Vercoe was invited to Ircam, in Paris, to carry out a computer music project integrating the flute playing of Lawrence Beauregard (in those years flautist of the Ensemble Intercontemporain) with the 4X digital processor designed by Giuseppe Di Giugno.[1] Meanwhile, Beauregard and Miller Puckette had already experimented with using the flute in combination with the 4X.[2]

The project – The Synthetic Performer was developed in the C programming language for the PDP-11/55 computer that controlled the 4X audio processor.[1] In later experimentation, the Yamaha DX7 digital synthesizer was also used in place of the 4X.[3] Starting from the model of human interaction, Barry Vercoe identified three operational modules to be implemented in software: Listening, Learning, and Execution. During a performance, the computer had to be able to capture and analyze information as it arrived, in particular identifying the speed of execution, the dynamics, and the current position in the score (Listening); synchronize its own playing with that of the performer (Execution); and record performance errors or difficulties and store that data for future use (Learning). The computer was programmed to handle all these situations, but the Learning step was never enabled during the various experiments, as it involved some complications.[1] The technological apparatus included optical sensors applied to the flute keys, as well as a microphone connected to digital filters, which together captured the flute performance. All this made it possible to obtain information about the speed of execution and the frequencies being emitted. Using this information, together with a score stored in advance, the computer was able to play parts coherent with the performer's, in terms of both harmony and tempo.

Presentation – In 1984, at the International Computer Music Conference held at Ircam in Paris, Vercoe and Beauregard presented their project to the international community. Here you can watch a portion of the video demonstration prepared for the occasion: Beauregard performs a sonata for flute by George Frideric Handel with computer accompaniment using harpsichord-like synthesized sounds.[2]

Other experiments were also conducted on works by Bach and on the Sonatine for flute and piano (1946) by Pierre Boulez, so as to demonstrate the effectiveness of the Synthetic Performer in contemporary-music performance contexts as well.

Back in the U.S. – Once back in the United States, Vercoe prepared a new version of the Synthetic Performer, reworked for Apple Macintosh II computers at MIT, which remained in use until the early nineties. Finally, note that various parts of the Synthetic Performer, being implemented in C, were later reused by Vercoe to develop Csound.[2]

 

For this topic I’ve read:

[1] Barry Vercoe, The Synthetic Performer in the Context of Live Performance, Proceedings of the International Computer Music Conference, Paris, 1984.
[2] Barry Vercoe, Foreword, in The Csound Book, edited by Richard Boulanger, The MIT Press, Cambridge (MA), 2000.
[3] Barry Vercoe, Miller Puckette, Synthetic Rehearsal: Training the Synthetic Performer, Proceedings of the International Computer Music Conference, Vancouver, 1985.
