Unit 2: Three Stages of Digital Art

Synopsis

While the groundwork for computer music and multimedia as we practice them today had been laid during the first half of the 20th century, the development of the digital computer at mid-century was a catalyst of gargantuan impact. Yet the beginnings of digital art were modest: artists had a vision but didn’t really know how to teach a computer to accomplish what they were after. The post-war “tabula rasa” period in Europe and the United States was one of experimentation and speculation, in digital art as elsewhere, and much of what was produced was informed by the budding fields of information and communication theory. In 1965, John R. Pierce expressed his frustration with the digital computer: “As a musical instrument, the computer has unlimited potentialities for uttering sound. It can, in fact, produce strings of numbers representing any conceivable or hearable sound… Wonderful things would come out of that box if only we knew how to evoke them.” This ushered in a new phase in the exploration of the digital computer for art making. In the early 1970s, the first computer music centers were established as platforms for interdisciplinary research where composers, engineers, scholars, and scientists were invited to collaborate. With personal computers still slow and costly, those centers remained major attractors for technically minded composers and artistically minded technologists until the advent of broadband Internet and the development of cheap and extraordinarily efficient CPUs led to a paradigm shift around the turn of the millennium. The systematic exploration of the affordances of the computer gave way to a more intuitive and interactive manipulation of digital interfaces, both for the acquisition of knowledge and the production of art. The postdigital era was born.

Introduction

Multimedia can be conceptually understood as a co-evolution of artistic expressions and computers, shaped by memes that have dominated our thinking since World War II. Among these are the axioms of information theory and cybernetics, which since their beginnings in the early 1940s have spread rapidly into seemingly unrelated fields such as literature, the visual arts, and music. It comes as no surprise that cyberneticists have sought to apply their principles to the arts, and that artists have longed to use computers to formalize creative processes. This co-evolution has gone through three stages, which we could call the speculative stage (1950-70), the exploratory stage (1970-2000), and the interactive stage (since 2000). Particularly in the transition to the last stage, we observe a paradigm shift that goes hand in hand with a changing role of the arts and media in the globalized world. Some also refer to it as the Postdigital.

One of the main achievements of gender studies is the recognition that the history of the arts (and particularly that of music) is a construct to which, depending on the perspective of the observer, alternative histories are conceivable. The same holds when attempting to provide a historical overview of the inspiration that computers have provided to artists. Apart from the fact that alternative understandings of this history exist, justice cannot be done in this unit to all the protagonists who have contributed significantly to the subject, and it is due to the nature of this presentation that the primary focus will be laid on music.

The interaction between technology and the arts does not have to be a direct one; it can be mediated by the so-called zeitgeist. The term zeitgeist succinctly stands for the phenomenon of memes - concepts and ideas - that can appear simultaneously in different places and then spread throughout wide areas of culture. Thus it becomes understandable that the computer or information science can serve as an inspiration even when artists themselves did not work with computers or - like Karlheinz Stockhausen, for example - were highly skeptical of the machine. Against this backdrop it becomes clear why Stockhausen nonetheless finds a place in this story.

In presenting the history of multimedia in three stages, it is also important to note that there are, of course, smooth transitions and overlaps. While, for example, the beginning of interactive music is given as the year 2000, one should be aware that the origins of interactivity go back to the very beginnings of computer music and that important interactive pieces, for example those by Michel Waisvisz, were realized as early as the 1980s.

SPECULATIVE STAGE

Development of cybernetics and information theory: Norbert Wiener

Information Theory

The year 1945 marked a new beginning in the history of the Western world. Fascism and nationalism had plunged almost the entire civilized world into a war of annihilation, to which the United States responded with an unparalleled rearmament. In the course of this mobilization, the US Army Ordnance Department started to develop the ENIAC (Electronic Numerical Integrator And Computer) in 1943. At the same time, Leo Szilárd (1898-1964), John von Neumann (1903-1957), and Norbert Wiener (1894-1964) laid the mathematical foundations of information theory. It was Wiener who coined the word “cybernetics” in 1947 as an umbrella term for the science of “control and communication in the animal and the machine”.

Norbert Wiener: Men, machines, and the world about them
The cover of Norbert Wiener's groundbreaking book Cybernetics: or Control and Communication in the Animal and the Machine

The ENIAC computer

We should note that the ENIAC, completed in late 1945, was by no means the first digital computer: the Z3 computer built by Konrad Zuse had been finished four years earlier but was never put to practical use.

The Zuse Z3 computer

Parallel to the spread of cybernetics in the exact sciences, principles of quantifiable entities increasingly influenced artists and their creations. In 1948, the Swiss architect and painter Le Corbusier published his first draft of the Modulor, an anthropometric scale of proportions based on the Golden Mean, and in 1947 the mathematician and composer Milton Babbitt wrote “Three Compositions for Piano,” considered the first example of total serialism. In 1949, Olivier Messiaen composed the piano pieces “Mode de valeurs et d’intensités” and “Cantéyodjayâ,” influenced by Anton Webern, whose music had laid the ground for this direction, one that was eventually taken up and further developed by the young composers of the early Darmstadt School (Pierre Boulez, Luigi Nono, Karlheinz Stockhausen, and others).

Olivier Messiaen: Cantéyodjayâ

Application of Information Theory to Music

Both approaches, information theory and serial music, have in common a way of thinking in quantifiable, statistically detectable units of information. Iannis Xenakis consequently introduced the concept of stochastics into musical discourse in his 1955 critique of “linear counterpoint”. It should be noted that technological and quantitative thinking in the arts didn’t just pop up in the middle of the 20th century: the Bauhaus, founded in 1919 in Weimar, can be considered a precursor to much that followed, and dodecaphony, conceived by Hauer in the same year and revised by Schoenberg four years later, is another example of how the zeitgeist operated on different art forms.

In the 1950s, information theory, as a very young field of science, raised numerous questions in its heuristic application to music. In retrospect, many of these assumptions were utopian and hardly verifiable; one can therefore call this stage in computer music, which lasted roughly until 1970, the “speculative stage”. Without extensive knowledge of music perception and cognition, which at the time had hardly developed beyond the results of Helmholtz’s 19th century research, information theory seemed to at least give composers a tool with which to organize the almost endless possibilities of electronic, as well as acoustic, music.

The American pioneer of computer music Lejaren Hiller (1924-1994) gave two lectures on information theory and computer music at the International Summer Courses for New Music in Darmstadt in 1963. His concept of musical information density was informed by statistics and the mathematical foundations of communication theory. Hiller’s diagrams comparing the information density of select compositions from Mozart to Hindemith (measured in bits per second) seem strange from the distance of 40 years, but they corresponded to a widespread way of thinking at that time.

Measurement of information content of select pieces according to Lejaren Hiller
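Hiller’s measure can be approximated with Shannon’s entropy formula: treat each note as a symbol, estimate symbol probabilities from their relative frequencies, and sum -p·log2(p). The sketch below is a loose illustration of the principle only; the melody, its segmentation into pitch symbols, and the tempo are hypothetical, not Hiller’s actual procedure:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(symbols):
    """Shannon entropy H = -sum(p * log2(p)) over the symbol distribution."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A hypothetical eight-note melody; dividing bits per note by note duration
# yields a density in bits per second, the unit used in Hiller's diagrams.
melody = ["C", "D", "E", "C", "C", "G", "E", "C"]
h = entropy_bits_per_symbol(melody)
notes_per_second = 2.0  # assumed tempo of two notes per second
print(f"{h:.2f} bits/note -> {h * notes_per_second:.2f} bits/second")  # → 1.75 bits/note -> 3.50 bits/second
```

On this view, a melody that repeats one pitch carries zero bits per second, while a maximally unpredictable one carries the most; this is exactly the kind of quantification Hiller applied to Mozart and Hindemith.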

The first computer-generated scores

Lejaren Hiller (together with Leonard Isaacson) is also credited with having created one of the first computer-generated compositions in 1957: this composition for string quartet is called the Illiac Suite, as it was calculated on the ILLIAC (Illinois Automatic Computer).

Excerpt from the Illiac Suite by Lejaren Hiller & Leonard Isaacson

Music From Mathematics

Yet the first piece of music ever composed by a computer was the melody of the popular song “Push-button Bertha” (1956) by Martin Klein and Douglas Bolitho (albeit with a very human harmonization).

Push-button Bertha by Martin Klein and Douglas Bolitho

John Cage

John Cage (1912-1992) was a protagonist of the 20th century avant-garde whose work had an enormous impact on nearly all art forms. His interest in using electronics goes back to the 1939 composition Imaginary Landscape No. 1, calling for two variable-speed phono turntables playing “frequency recordings”.

1952: Williams Mix

Cage’s first composition for tape pushes the limits of the medium. While fully realized in an analog medium, Cage’s approach of using short samples, some lasting just a few milliseconds, presages modern computer editing techniques. Williams Mix features

“A score (192 pages) for making music on magnetic tape. Each page has two systems comprising eight lines each. These eight lines are eight tracks of tape and they are pictured full-size so that the score constitutes a pattern for the cutting of tape and its splicing. All recorded sounds are placed in six categories … Approximately 600 recordings are necessary to make a version of this piece. The composing means were chance operations derived from the I-Ching.”

The only realization of this score until 2012 was by Cage himself. Despite the support of Earle Brown and David Tudor in cutting and gluing the tapes, it took about a year to complete the complex sound montage, which is only four minutes long. The piece is performed with four stereo tape recorders and eight loudspeakers. In 2012, Tom Erbe became the first person to recreate “Williams Mix” from the original score, entering each tape edit from the score into the computer and creating performance software carefully following Cage’s notes.

Score of Williams Mix (1952)

1969: HPSCHD

In 1969, Cage and Lejaren Hiller organized a spectacle which featured 7 harpsichords playing extracts of music by Cage and classical composers, 52 tapes of computer-generated sounds, 6400 slides projected by 64 projectors, and 40 films. It has been described as:

“Arguably the wildest composition of the 20th century. Big, brash, exuberant, raucous, a performance is about four hours of ongoing high-level intensity. The sound is a mixture of seven amplified harpsichords playing computer-generated variations of Mozart and other composers along with 52 computer-generated tapes playing what could be off-tuned trumpets sounding some musical charge. The thousands of swirling images, overlayed and mixed, of abstract shapes and colors and of space imagery from slides and films borrowed from NASA, create a chaotic riot of shifting form and color. The audience walks through the performance space, between the harpsichord players, around the loudspeakers.”

Original recording of the first part of HPSCHD
2013 production at Eyebeam Art and Technology Center

Xenakis: Formalised Music and Multimedia

Iannis Xenakis (1922-2001), who for a while also worked as an architect under Le Corbusier, was certainly one of the most celebrated computer composers. He drew his inspiration, ancient Greek mythology aside, from complex geometric figures and from the statistical procedures known as stochastics. In “Formalized Music: Thought and Mathematics in Composition,” published in 1971 and in some ways the counterpart to Messiaen’s “Technique de mon langage musical,” Xenakis describes the mathematical prerequisites for his work and his thinking in terms of densities and probabilities. This thinking was also motivated by his early criticism of total serialism as an example of a utopian striving to subject all musical parameters to the same laws, one that at the same time failed to do so, since the cognitive processes operating on the outcomes impose their own laws, which may run counter to the original intent.

Iannis Xenakis

Xenakis’ ST/n compositions, such as ST/4 for string quartet and ST/48 for large orchestra, represent his effort to use computers to calculate stochastic processes and to transcribe them into music.
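The kind of calculation behind the ST pieces can be sketched in a few lines: a stochastic program draws the number of events per time slice from a Poisson distribution and then draws each event’s parameters from further distributions. The following is a loose illustration of the principle, not a reconstruction of Xenakis’s actual ST program; the pitch range, mean density, and duration distribution are arbitrary assumptions:

```python
import math
import random

random.seed(1)  # reproducible draws

def poisson(lam):
    """Knuth's method: returns a Poisson-distributed integer with mean lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def stochastic_slice(mean_density=3.0):
    """One time slice of a hypothetical stochastic score: a Poisson-distributed
    number of notes, each with a uniformly drawn pitch (MIDI 36-84) and an
    exponentially distributed duration (mean 0.25 s)."""
    return [(random.randint(36, 84), random.expovariate(4.0))
            for _ in range(poisson(mean_density))]

for i in range(4):
    events = stochastic_slice()
    print(f"slice {i}: {len(events)} events")
```

The composer then transcribes such lists of (pitch, duration) pairs into conventional notation for acoustic instruments, which is essentially what distinguishes ST/4 from a tape piece.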

In 1958, Xenakis contributed to a large-scale multimedia project, the Poème électronique, shown at the Philips Pavilion on the occasion of the World Expo in Brussels. His contributions were the elaborate design of the building, a structure based on hyperbolic paraboloids, as well as a short musical sequence with granular sounds (named Concret PH). His mentor Le Corbusier, who was credited for the architecture (!), contributed the art work, while Edgard Varèse (1883-1965) composed the majority of the sounds.

In his book “Space Calculated in Seconds,” the UC Berkeley architecture professor Marc Treib writes about the project:

“Poème électronique” (electronic poem) is the first electronic-spatial environment that combines architecture, film, light and music to create an overall experience that fuses space and time. Under the direction of Le Corbusier, Iannis Xenakis designed the concept and geometry of the World’s Fair building according to mathematical functions, while Edgard Varèse composed the mixed concrete and vocal music to accompany the dynamic light and image projections conceived by Le Corbusier. Varèse had long striven in his music for abstract, partly visually inspired concepts of form and spatial movement. For “Poème électronique,” he used machine noises, transposed piano chords, filtered choral and soloist voices, and synthetic timbres, among other things. With the help of advanced technology provided by Philips, the sounds of the tape composition could be moved along complex trajectories in space. “The Philips Pavilion presented a collage-like litany of 20th century man’s dependence on electricity rather than daylight, on virtual perspectives rather than vistas of the earth.”

Hand-colored postcard of the Philips Pavilion

Describing the Ineffable: Le Corbusier, Le Poème Electronique and Montage by Açalya Kiyak

The Virtual Electronic Poem (VEP) project, co-funded by the European Union through the Culture 2000 programme, realized a virtual reality (VR) environment capable of reproducing the global experience of the Poème électronique through a philologically accurate reconstruction of the original installation and a technologically innovative VR implementation.

How Time Passes: Stockhausen

In Germany, the phonetician Werner Meyer-Eppler (1913-1960) exerted a major influence on composers such as Herbert Eimert (1897-1972) and Karlheinz Stockhausen (1928-2007). Meyer-Eppler was concerned with the fundamentals of information theory and its application to phonetics and speech synthesis. Although Karlheinz Stockhausen in principle did not use computers for composing, his thinking was strongly influenced by information theory and the formalization of musical processes. His main theoretical work “Wie die Zeit vergeht” (How Time Passes) deals with the phenomenon of musical time as a continuum from the frequency of a single tone to the large-scale form of opera. The “Elektronische Studie II”, composed and realized in 1954, is still entirely in the tradition of total serialism. A real-time realization (created in 2000) by the author of this unit is included in the multimedia programming environment Max.

Elektronische Studie II by Karlheinz Stockhausen in a realization by Georg Hajdu

Other notable electronic and mixed media works by Stockhausen from the 1950s include Gesang der Jünglinge (1956, first combination of electronic sounds with sampled sound) and Kontakte (1960, mixed media: electronic sounds combined with acoustic instruments). The latter composition also served as the musical basis for a Fluxus piece called Originale (see unit 1).

Gottfried Michael Koenig

A companion of Stockhausen’s in the late 1950s was Gottfried Michael Koenig (b. 1926), who worked at the Studio for Electronic Music in Cologne from 1954 to 1964 and as a staff member and director of the Institute of Sonology in Utrecht from 1964 to 1986 (the institute was reestablished in The Hague in 1986). There he developed the computer programs Projekt 1 to 3 and SSP. The following explanation can be found on Koenig’s homepage; it is exemplary of the kind of thinking prevalent at the time:

“The SSP Sound Synthesis Program was written shortly after the Institute of Sonology at Utrecht University obtained its own computer, in 1971. It was designed to aid the experimental exploration of a concept that exceeded the possibilities of electronic music produced in analog studios: the representation of sound as a sequence of amplitudes in time. It seemed an attractive proposition to make use of the methods of aleatoric and groupwise selection of elements employed in “Project 1” and “Project 2” for the purpose of filling the space between stationary and noise-like sounds. This approach is not based on an acoustic model (on imitating familiar sounds), but must rely on the empirical finding of previously unknown sounds by systematically permutating the elements of an initial situation. Since the initial positions can be catalogued, the assumption was that a systematic description of the resulting sound would also be feasible.”
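The idea behind SSP, composing a waveform directly as a sequence of amplitude values rather than from an acoustic model, can be illustrated with a toy version: amplitude values are selected aleatorically from a small catalogue and held for random durations, producing sounds between stationary tones and noise. This is a rough sketch of the principle only; SSP itself worked quite differently in detail, and the catalogue and segment lengths below are arbitrary assumptions:

```python
import random
import struct
import wave

random.seed(42)  # reproducible selection

# A small catalogue of amplitude values: the "elements of an initial situation".
amplitudes = [-0.8, -0.3, 0.0, 0.2, 0.5, 0.9]

# Aleatoric selection: each randomly chosen amplitude is held for a random
# number of samples; short segments sound noisy, long ones nearly stationary.
samples = []
while len(samples) < 44100:  # one second at 44.1 kHz
    value = random.choice(amplitudes)
    samples.extend([value] * random.randint(20, 400))

# Write the amplitude sequence as a 16-bit mono WAV file.
with wave.open("ssp_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(44100)
    f.writeframes(b"".join(struct.pack("<h", int(v * 32767)) for v in samples))
```

Because the initial catalogue can be enumerated, each permutation of selections could in principle be catalogued as well, which is exactly the systematic hope Koenig describes above.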

Max Mathews: The first computer-generated sounds

Max Mathews (1926-2011) is often referred to as the father of computer music and the inventor of the Radio-Baton. A trained engineer, he first worked at Bell Labs before joining the Stanford University faculty in 1987. He described the development of computer music in his own succinct words:

“Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines - they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.” (from “Horizons in Computer Music,” March 8-9, 1997, Indiana University)

AT&T Archives: Incredible Machine

EXPLORATORY STAGE

The Computer as a Young Artist

In 1965, John R. Pierce (1910-2002) expressed his reverence for (and frustration with) the computer as a musical instrument in “Portrait of the Computer as a Young Artist” with the following statement:

“As a musical instrument, the computer has unlimited potentialities for uttering sound. It can, in fact, produce strings of numbers representing any conceivable or hearable sound… Wonderful things would come out of that box if only we knew how to evoke them.”

Around the same time, a certain disenchantment with the prevailing serial method of composition spread, a feeling that Adorno summed up in his 1966 article “Difficulties”: it seemed that something fundamental had been lost in most compositions of the time.

Nevertheless, a return to old traditions seemed out of the question, so instead further efforts were to be made to explore the properties of musical and acoustic phenomena and the mechanisms of their perception. Consequently, at the beginning of the 1970s, interdisciplinary research centers were founded with the intention of having engineers, psychologists, computer scientists, and composers work on joint projects. Of these centers, the “Center for Computer Research in Music and Acoustics” (CCRMA), founded in 1975 by the composer John Chowning and the computer scientist Julius O. Smith at Stanford University, and the Paris “Institut de Recherche et Coordination Acoustique/Musique” (IRCAM) were the first to open their doors.

The Gravesano Studio

The first idea of a center dedicated to research in music and multimedia at the crossroads of art, technology, and science goes back to the conductor and pianist Hermann Scherchen (1891-1966), who founded a studio in Gravesano, Switzerland. He also edited a journal called Gravesaner Blätter, to which, between 1955 and 1966, some of the great minds of his time, such as Boulez, Pierre Schaeffer, Xenakis, Moles, Trautwein, Le Corbusier, Adorno, Meyer-Eppler, and Tenney, as well as Mathews and Pierce, contributed.

Bell Labs

John R. Pierce (1910-2002) was an American engineer and author who wrote on electronics and information theory and jointly developed the concept of pulse-code modulation (PCM) with his Bell Labs colleagues Barney Oliver and Claude Shannon. He supervised the Bell Labs team which built the first transistor and gave the device its name. He also lent his name to the Bohlen-Pierce scale, which he co-discovered. At Bell Labs, he was known to give his co-workers, Max Mathews among them, the freedom to pursue odd projects without an explicit goal of creating products for commercial exploitation. There was a position for a “research composer in residence,” which gave James Tenney (1934-2006) and Jean-Claude Risset (1938-2016), a French physicist, pianist, and composer, the opportunity to pursue their research interests. Jean-Claude Risset later became the head of the Computer Department at IRCAM (1975-1979) and composed his Duet for One Pianist (1989) at the MIT Media Lab.

Bell Labs was noted for the following accomplishments in the area of computer art:

  • 1961: A computer performs “Daisy Bell” with music programmed by Max Mathews and speech programmed by John Kelly and Carol Lochbaum. This was later the inspiration for the computer “HAL” singing the song in the movie 2001. Daisy Bell was also the nickname of one of Alexander Graham Bell’s daughters.
  • 1962: The first digital computer art was created at Bell Labs by A. Michael Noll.
  • 1963: The first computer animated film was produced by Edward Zajac at Bell Labs.
  • 1963: The first computer animation language, BEFLIX, was created by Ken Knowlton.
  • 1966: first ASCII art was created by Ken Knowlton.

IRCAM

The foundation of IRCAM was decided by President Pompidou as early as 1970 on the initiative of Pierre Boulez (1926-2016), who was also to become the institute’s long-time director. After a planning phase lasting many years, it began operations in 1974. It is a historical fact that Boulez, who lived for a time in Baden-Baden in German “exile,” had initially approached the physicist and Nobel Prize winner Werner Heisenberg with his idea of an interdisciplinary music research center. But Heisenberg, then director of the Munich Max Planck Institute for Physics and Astrophysics and an outstanding figure in German scientific life, had already decided in his youth against a career as a composer and approached Boulez’s idea with “uncertainty”. Thus, the opportunity arose to establish an institute in the center of Paris, excellently equipped both financially and in terms of personnel. Accordingly, the list of researchers and musicians who have spent some time at IRCAM is extremely impressive.

The realization of numerous compositions at IRCAM, for which - as for Boulez’ “Répons” (1986) - new software and hardware was developed, is symptomatic of the successful collaboration between musicians and scientists, but also of the increasing academization of computer music and multimedia.

Of the composers working at IRCAM, Gérard Grisey (1948-1998) and Tristan Murail (* 1947) stand out in particular as founders of the compositional technique known as “spectralism”. Together with Roger Tessier and Michaël Levinas, they founded the ensemble L’Itinéraire as early as 1973, which premiered numerous spectral compositions. The basis of spectral composition is the Fourier analysis of sounds and the transcription of the partials contained in the spectra to acoustic instruments. The spectralists were soon joined by the British composer Jonathan Harvey (1939-2012), who in 1980 realized his tape music piece “Mortuos Plango, Vivos Voco”. The software Max (named after Max Mathews) was invented by Miller Puckette (* 1959), a mathematician and computer musician working at IRCAM in the 1980s under the supervision of David Wessel.

Photo of the administrative building of IRCAM
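The analytical core of spectral composition, finding the prominent partials of a sound via Fourier analysis, can be sketched with a plain discrete Fourier transform. A real spectralist workflow would analyze recorded instrumental or bell sounds; here a synthetic test tone with three known partials stands in, and the frame length and frequencies are arbitrary assumptions:

```python
import cmath
import math

SR = 1024  # samples per one-second frame, so each DFT bin is exactly 1 Hz wide

# Synthetic "instrumental" tone: a 110 Hz fundamental plus two weaker partials.
signal = [math.sin(2 * math.pi * 110 * n / SR)
          + 0.5 * math.sin(2 * math.pi * 220 * n / SR)
          + 0.25 * math.sin(2 * math.pi * 330 * n / SR)
          for n in range(SR)]

def dft_magnitude(x, k):
    """Magnitude of DFT bin k; with a 1-second frame, bin k corresponds to k Hz."""
    n_total = len(x)
    return abs(sum(v * cmath.exp(-2j * math.pi * k * n / n_total)
                   for n, v in enumerate(x))) / n_total

# The three strongest bins below 400 Hz are the partials a spectral composer
# would transcribe for acoustic instruments.
magnitudes = {k: dft_magnitude(signal, k) for k in range(1, 400)}
partials = sorted(sorted(magnitudes, key=magnitudes.get, reverse=True)[:3])
print(partials)  # → [110, 220, 330]
```

In a piece like “Mortuos Plango, Vivos Voco” the analyzed spectrum (of a cathedral bell, in that case) then supplies both the pitch material and the formal proportions.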

The Composer as Researcher: Clarence Barlow

Another former IRCAM resident is Clarence Barlow (* 1945), whose work epitomizes the exploratory stage of computer music: the composer as cognitive researcher, music theorist, and software programmer all at the same time. In this way, incidentally, he resembles James Tenney, who, as stated earlier, was involved in experiments at Bell Laboratories with Max Mathews and later worked on cognitive and historical aspects of tuning and tonality. Barlow studied with Bernd Alois Zimmermann (1918-1970) in 1968-70 and the following two years with Karlheinz Stockhausen. He worked at various European studios, including IRCAM, where he realized his groundbreaking piano piece “Çoğluotobüsişletmesi” (1978). In his book “Bus Journey to Parametron” (1980), interspersed with witty puns, Barlow describes in detail the music-theoretical premises that govern the composition of the 30-minute work. A plural understanding of music history characterizes Barlow’s aesthetics, bringing him close to Umberto Eco’s theory of postmodernism. Instead of the tabula rasa of serial composition, his thinking is characterized by allusions and an effort to formalize historical phenomena such as meter, tonality, and style, while bringing his theory in line with the cognitive research of Roger Shepard. In an interview with Gisela Gronemeyer, for example, he says: “In Çoğluotobüsişletmesi, I was immensely pleased to see that very many passages of music quite unintentionally resemble certain musical traditions of the world. There is Balinese music in it, Gregorian music, jazz in different variations. Also a bit of rock music in a certain place, and of course classical and romantic music, although I only wanted to capture one principle in the music. I wanted to write a really romantic piece, but I had no idea that my theory, my system would give me access to such styles”.

Screenshot of AUTOBUSK by Clarence Barlow, considered one of the first real-time generative music applications in the history of computer music. It was based on the research performed while working on Çoğluotobüsişletmesi.

CCRMA

The establishment of IRCAM was aided by Max Mathews and John Chowning (* 1938), who co-founded the “Center for Computer Research in Music and Acoustics” (CCRMA) at Stanford University, which opened its doors in 1975. Incidentally, Chowning was also involved in a failed attempt to establish a computer music research center at the Hamburg University of Music and Theater in 1973 on behalf of György Ligeti. Chowning is noted for his work in sound spatialization and for the invention of FM synthesis, which made his university more than $25 million in licensing fees.

Photo of John Chowning in front of a Yamaha synthesizer using the synthesis technique he invented
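Chowning’s FM synthesis produces rich spectra from just two oscillators: the frequency of a carrier sine wave is modulated at audio rate by a second sine wave, following the classic formula y(t) = sin(2π·fc·t + I·sin(2π·fm·t)). A bare-bones sketch of that formula follows; the carrier, modulator, and index values are arbitrary choices, and a musical implementation would additionally shape the index with an envelope:

```python
import math

SR = 44100  # sample rate in Hz

def fm_tone(carrier=440.0, modulator=220.0, index=5.0, seconds=1.0):
    """Simple FM synthesis: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)).
    Sidebands appear at carrier +/- k*modulator; the index I controls how
    much energy moves into them (weighted by Bessel functions)."""
    n = int(SR * seconds)
    return [math.sin(2 * math.pi * carrier * t / SR
                     + index * math.sin(2 * math.pi * modulator * t / SR))
            for t in range(n)]

tone = fm_tone()
print(len(tone))  # → 44100
```

The economy of the technique, two oscillators instead of dozens of additive partials, is what made it attractive for hardware like the Yamaha DX7 and explains the licensing revenue mentioned above.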

THE INTERACTIVE STAGE

The Digital Revolution is over

In 1985, another research center opened its doors: the MIT Media Lab. According to its own website, “many facets of the digital revolution can be traced back to ideas from the Media Lab.” One of its prominent figures is the American composer Tod Machover (* 1953), who started the hyperinstrument project in 1986 with “the goal of designing expanded musical instruments, using technology to give extra power and finesse to virtuosic performers.” The architect Nicholas Negroponte (* 1943) served as director of the MIT Media Lab between 1985 and 2000 and became involved in the creation of WIRED magazine. Negroponte expanded many of the ideas from his WIRED columns into a best-selling book, Being Digital (1995). He is noted for his statement: “Like air and drinking water, being digital will be noticed only in its absence, not by its presence. Face it - the Digital Revolution is over”.

This paradigm shift is characterized by the blurring of identities and functions mediated by the ubiquity and permeation of the Internet: producers and consumers of content coalesce into the single role of the prosumer. In computer music and multimedia, the separation between composition and improvisation in live performance is abolished, just as is the distinction between performer and audience in the case of interactive installations. New terms emerge: participatory music, net music, real-time composition, autopoiesis (Greek autos = self and poiein = to make, thus literally “self-making”), streaming, autogenerative processes, etc. A new type of composer is emerging who, owing to the general availability of affordable hardware and sophisticated software, is able to work intuitively and interactively without profound knowledge of music theory or computer science. The fact that this paradigm shift, which is also accompanied by social upheavals, was decades in the making will be shown and illustrated with some examples in the following.

The Global Village

The statements formulated in “Understanding Media” (1964) by Marshall McLuhan (1911-1980) about our world as a global village have become commonplace in the intellectual discourse: “Today, after more than a century of electric (today one would rather say “electronic”) technology, we have extended our central nervous system in a global embrace, abolishing both space and time as far as our planet is concerned”.

In fact, it was not until 30 years later, with the proliferation of the hypertext-based World Wide Web and the associated expansion of the Internet, that the implications of this became generally noticeable. The decisive advantage of the Internet over traditional media is, on the one hand, the so-called back channel, which allows the recipient to participate in communication processes, and, on the other hand, its broad distribution, which makes it possible to turn even economically less potent users into global providers of information in the broadest sense.

Composers who took up the memes of globalization and networking, such as Max Neuhaus or Bill Fontana, reacted rather intuitively with concepts that first used telephone circuits and later satellite technology to merge space and time. In Alvin Curran’s “Crystal Psalms” (1988) and Larry Austin’s “Canadian Coastlines: Canonic Fractals for Musicians and Computer Band” (1981), musicians in different locations were synchronized during performances via satellite or landlines.

In the San Francisco Bay Area, computer-enthusiastic composers got together at the end of the 1970s and organized the first concerts with networked computers in 1978, calling themselves “The League of Automatic Music Composers” from 1980. The group “The Hub” emerged from this formation in 1987 and included John Bischoff, Tim Perkis, Mark Trayle, Chris Brown, Scot Gresham-Lancaster, and Phil Stone.

Members of The Hub posing with their controllers

While the possibilities of networked composition were initially limited to local performances through the use of MIDI networks (the Musical Instrument Digital Interface, a technology originally developed for communication between musical instruments and computers, was repurposed to exchange data between computers), the 1990s saw the creation of broadband networking technology based on TCP/IP (Transmission Control Protocol/Internet Protocol), which was used to realize Web-based compositions beginning around 1996.
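The basic mechanism of such a networked performance, exchanging note messages between machines over IP, can be sketched with two UDP sockets. This is a toy example for illustration; real systems such as Quintet.net add scheduling, clock synchronization, and a notation layer, and the JSON message format below is an assumption, not any system’s actual protocol:

```python
import json
import socket
import time

# Receiver: one performer's machine listening for incoming note events.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))  # let the OS pick a free port
recv_sock.settimeout(5.0)
port = recv_sock.getsockname()[1]

# Sender: another performer broadcasting a note event encoded as JSON.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
event = {"pitch": 60, "velocity": 96, "time": time.time()}
send_sock.sendto(json.dumps(event).encode(), ("127.0.0.1", port))

# The receiving machine decodes the event and would schedule it for playback.
data, addr = recv_sock.recvfrom(1024)
note = json.loads(data)
print(f"received pitch {note['pitch']} from {addr[0]}")  # → received pitch 60 from 127.0.0.1
```

UDP is the usual choice in this context (as in the later OSC protocol) because low latency matters more for live performance than guaranteed delivery.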

In Austria, a forum for digital and interactive art was created in 1987 with the “Prix Ars Electronica” in Linz. Perhaps it is therefore no coincidence that the first European production of “Brain Opera” (1996) by MIT Media Lab professor Tod Machover was also shown in Vienna, and that its interactive installation found a permanent home in Vienna’s Haus der Musik. The “Brain Opera” reflects, like hardly any other work of its time, the multimedia, interactive, participatory, and networked aspects of the new music, allowing the audience to determine its course by means of so-called hyperinstruments.

A Few Words

A few words about my own work. As a student of Clarence Barlow’s and later a PhD student at CNMAT under the supervision of David Wessel, I spent my formative years as a computer composer and multimedia artist amid the paradigm shift between the exploratory and interactive stages. My opera “Der Sprung - Beschreibung einer Oper” (1994-98) is characterized by an intensive research period during which I used novel software for spectral analysis and derived the entire structure of the opera from a short audio sample. Yet in its Intermezzo, which requires three keyboard players as well as eight singers performing an audio score generated by a computer in real time, the focus is laid on the interaction between the musicians and technology. Another scene from the opera epitomizes the changing functions that characterized the new stage: the singers react to the computer performing a passage derived from the spectral analysis of speech by assuming the role of the audience, laughing and shouting. The opera was premiered in October of 1999, the same year I created the first draft of Quintet.net, a networked multimedia performance software which was subsequently used in a Munich Biennale opera production called Orpheus Kristall. In this work, my software serves to open a window onto the busy and hidden “underworld” of the Internet: musicians in five locations around the globe interact, to use McLuhan’s terms, as if time and space were abolished.

The Postdigital

In the Postdigital we observe an explosion of technological possibilities and artistic expressions; much of it will be the subject of the upcoming presentations. A comprehensive overview will be given by Alexander Schubert in the next unit.


Authors

  • Georg Hajdu

Topics

  • The Speculative Stage
  • The Exploratory Stage
  • The Interactive Stage
