a lecture in the YNEZ Series
Advances in Music Notation
at
CREATE
Department of Music
University of California
Santa Barbara, CA 93106 USA
Thursday, 28th February 2002 at 6pm
with additional footnotes:
March 2002
This lecture expands upon the proposals I am making for a Graphic User Interface for developing
music. The genesis of these ideas can be found in the previous papers at this web site.
Part 1 of this lecture describes my view of the crucial problems in 20th century music notation
and how these affected both composers and institutions over the past century. Part 2 is a
description of how the problems can be tackled using software. I present a proposal for a
GUI which incorporates a new paradigm for music notation, replacing the 19th century concepts
currently embodied in standard music notation.
1.2 The general context - inherited problems
1.3 A summary of the current technical problems
2.1 Social aspects of the proposals
2.2 A Graphic User Interface
2.3 Further Notes
First I'd like to thank Curtis Roads very much for inviting me to speak here today. I have
a very special background, and don't really fit into any of the usual bureaucratic categories
- I'm neither an academic nor a practising composer, nor a technician in the traditional sense,
nor do I have much experience of lecturing. So this talk is not without its risks. Thank you,
Curtis, for asking me anyway.
Part of what I have to say today has to do with the way the meanings of symbols change according
to their context. For example, you will not really be able to understand what I am about to
say, unless you know something about who I am, and how I understand our shared cultural context.
So here's the general form this talk is going to take:
Part 1.1 deals with the special context in this room: I know, generally speaking, who you
all are from the information presented at CREATE's web site and from what I've heard from
Curtis Roads. But it is even more important that everyone here knows something about me, and
I can't assume that you have all looked at my web site. So I'm going to read you a short description
of my personal background.
Part 1.2 deals with the general context: This is the relevant part of the inherited culture
of written music which we all share with the rest of the western world. Here again, I'm going
to read this part of the talk. I'll be dealing with a very high level perspective on the history
of written music over the last hundred years or so, and there would be a great danger of my
losing my thread if I tried to improvise this. There are simply too many interesting side-tracks,
and I really want to get on with Part 2 rather than getting bogged down in historical details.
This section is not intended to be a definitive, potted history of 20th century music. It
is supposed to give you an idea of the kinds of problems I'm trying to solve in the next section.
Part 2 will be longer and less rigid - this is where I'm trying to change the context I've
just described. I'm going to enlarge on my previous proposals for the structure of a Graphic
User Interface for editing music on computer terminals, and especially want to describe how
this can work at the lowest level of perceived events. I want to talk about the kinds
of symbols which can be used at this level, how their meanings can change according to context,
how a hierarchy of symbol-levels gets off the ground, and how all this can be expressed in
real software.
Part 3: I would be very happy if this talk could end with an open discussion of the topics
presented. Individuals, like myself, working outside “the system” need feedback.
Culture is teamwork - by definition. I have brought some notes about topics I think of as
“growth-points”.
I'm a freelance copyist and developer. I'm also a composer — I studied under Harrison
Birtwistle at the Royal Academy of Music in London from 1968-1972.
I left college wanting to tackle the serious theoretical problems in music notation which
had been left unsolved when the Avant-Garde collapsed in 1970.
The traditional institutional structures had proven incapable of solving these problems: Firstly,
composers were expected to solve them, but composers have too many other problems to solve
at the same time. They need to think about the poetic aspects of what they are doing, and
their survival depends on their mastering short-term practicalities. Secondly, the other institutions
(academic, government sponsoring, publishers etc.) only support rather short-term projects.
So I looked about for (and was lucky enough to find) a long-term strategy which would enable
me to remain close to the things which interested me.
After working for Universal Edition in London for a couple of years, I became Karlheinz Stockhausen's
principal copyist in 1974, and made an agreement with him which enabled me to get several
months in each year free to devote to my own work.
Unfortunately, at the beginning of 2001, Stockhausen told me that he no longer wanted to continue
with that agreement.
My work and the software I use
OK, so I'm a practical copyist with many years of experience creating complicated 20th century
scores. Here are the relevant parts of my curriculum vitae in a little more detail.
1974-1993: Handwritten scores
For the 19 years before Stockhausen began to use computer technology to publish
his scores, I copied them at a drawing board with pen and ink.
1980: My first personal computer (Apple II) - and my first experience of programming
I bought this computer because I wanted to reduce the amount of paperwork associated
with an Abstract Data Type I was developing. I'm interested in algorithmic composition.
1993: Stockhausen starts using computers to publish his scores - Finale, FreeHand and FreeHand Xtras...
In 1993, the then current version of Finale was hopelessly inadequate for the problems
confronting us, and I had to find some answers pretty quickly in order to keep my job. I was
already programming in C, and was thus able to write a special filter to improve Finale's
PostScript output enough for that to be edited economically in FreeHand.
FreeHand is a very powerful graphics editor, used routinely in the advertising industry. It's
a great program, with lots of money behind it - but it doesn't know anything about music, so
I began writing Xtras for it when that became possible in 1995.
I have never had to deny Stockhausen any of his most extreme demands as the result of inadequate
software, and have brought a few example scores with me, if anyone wants to look at these
later.
2001: Freelance copyist - ... and Sibelius
I'm currently looking about for new ways to finance my research & development
projects. The commercial music publishing world is at present dominated by two major programs:
Finale and Sibelius, and I have decided to learn the latter. I might just as easily have chosen
the latest version of Finale. Probably I was irrationally influenced by the fact that Sibelius
is a British program :-). Both these programs meet the minimum graphic requirements for publishing
conventional music, and they can both be used to input such music in a commercially viable
time - but, especially in non-conventional music, neither of them provides the precision, power
and flexibility achieved by combining FreeHand with my Xtras.
As some of you may know from my web site, I think that written music has stopped developing
and that something needs to be done about this. As far as I'm concerned, the lack of development
results from the failure of the arts world to come to terms with the collapse of 19th century
dualistic thinking, and I think we can get things moving again if we look these problems squarely
in the eye. It has always been my dream to find a way to get music breathing again, and I think
that goal may not be too far away.
At the end of the 19th century, common sense (especially among artists) was that the physical
(=mechanical) world had been essentially explained, but that a parallel, spiritual world existed,
in which Art happened.
This underlying dualism affected developments in two important aspects of music: firstly,
the theory of music notation, and secondly music's institutional structures. Interestingly,
those institutions have also failed to solve the problems we are now facing.
In the theory of music notation the dualism becomes:
Real time = absolute time + expressivity
Socially, the dualism leads to a perception of artists as “heroes” who transcend
the dead, material world. Over the years, this role-model for artists has led to the establishment
of institutions for the promotion of such heroes. The reasons for doing this are becoming
ever less clear. The institutions remain part of the problem we have to solve.
I'd like to include both of these problems - the technical and systemic - in what I'm going
to say, because both of them are related to the main object of this talk - the software proposals
I am currently making. Software not only embodies solutions to technical problems, but it
also interacts with, and changes the sociological structures within which it is used.
Remember that this is not supposed to be a potted history of music's notation and institutions
:-). This is just my version of the history of the crucial problems. Once you have understood
this context, you will be better able to understand what I'm trying to get at with the solutions
I'm proposing.
Let's go back to the 19th century. Here’s the above dualism again:
real time = absolute time + expressivity
or
performed time = mechanical time + performance practice
Here's a list of reasons why I think problems arose. I'll go through these quickly and then
give you a couple of real examples:
Contemporary common-sense was wrong:
time philosophy and the resulting notation conventions had been stagnating since the 17th century.
Space-Time equivalence:
Time was thought of as being equivalent to a dimension of space. The typographical (space)
conventions (subdivision symbols, making bars add up etc.) are directly linked to the meanings
of the symbols in time, and these meanings are linked to low level, absolute, mechanical time.
Where this is not the case (e.g. in cadenzas) the typographical rules are not properly defined
at all.
Ubiquity of tempo:
Standard music notation assumed the existence of tempo, and real music became increasingly
tempoless. Even now, at the beginning of the 21st century, there is still no theoretically
sound symbolic notation for tempoless music.
Unlimited expressivity:
There are no limits to what is meant by expressivity. Writers of music lose control, and performers
end up doing more or less what they like.
High-level structures not explicitly notated:
The notation conventions lack a concept of “level of information”. The concepts “chunking”
(subroutine) and “level of information” were developed in the late 20th century in connection
with computer languages.
However, 19th century notation does indeed use symbols having different levels of meaning:
Trills and other ornament symbols replace groups of notes, Roman numerals stand for chord
functions - but there are limits to the number of levels one can cram onto a single, two-dimensional
sheet of paper. Notice that the interpretation of ornament symbols was a particularly thorny
problem for researchers developing performance practices for ancient music during the mid
20th century.
Here are a couple of concrete examples to illustrate what I mean:
The first example is of typically flexible, slow moving, Romantic music: Notice that it breathes
- something which got lost in the 20th century.
play example #1 from CD :
Bruckner 7th Symphony, slow movement, bars 1-9
The lengths of these notes are judged in relation to the lengths of other notes in the vicinity
(the local context). The lengths of the notes are neither related to time segments produced
by some mechanical measuring device, nor are they related to some persistent tempo. At these
orders of magnitude, it is very difficult to remember any tempo at all. This music has nothing
to do with relations to absolute time - it's about building structural arches at very high
levels, and those high levels are not directly notated in the score. One finds out where they
are, and how to perform them, by learning a particular performance practice - i.e. becoming
a member of a specific performing tradition.
The second example - also typical of late Romantic music - also contains faster music, - clouds
of notes which smear the concept of simultaneity. There are, of course, many examples of this
kind of thing in other composers - for example Richard Strauss - but here is early Schoenberg.
I'd like to play nearly four minutes of this example, to give you time to think.
play example #2 from CD
Schönberg's Pelleas & Melisande,
from rehearsal number 43 “Ein wenig bewegter”
to the fermata before rehearsal number 50.
In the Universal edition score, this is from page 75
to the middle of the lower system on page 91.
Clock time is irrelevant within such clouds, just as the sequence
of characters in a word is irrelevant to the meaning of that word. Clock time is flat. Meaning
depends on higher levels of information. Clock time no longer has anything to do with what
the composer or performers mean.2
In both these examples it becomes difficult to see why one should stick to the conventions
of standard music notation - for example making bars add up in absolute time. Symbols for
“subdivisions” - so called “irrational” values - are also meaningless
in tempoless music. I'll come back to this when I get to the 1950s.
So late 19th century notation conventions ceased to describe the
things composers wanted to describe. They became unable to write what they meant.3
It had, of course, always been the case that notation represented an approximation, but the
problem now became acute. Composers wanted to write higher levels of structure than the notation
on the two-dimensional page could express. Incidentally, this leads to the notation of music
becoming increasingly work-intensive - like writing computer programs without the use of
subroutines. This is a very practical problem, which shouldn't be underestimated.
In this situation, conscientious performers think of themselves as being in a tradition of
unnotated “performance practice” (e.g. the school of pianists which begins with
Chopin).
Before leaving the 19th century, I'd like to emphasise the following:
The 19th century common-sense time paradigm had failed.
There are interesting parallel developments in physics (Einstein) at the beginning of the
20th century. The lessons learned from that paradigm shift have not yet found their way into
the theory of music notation. If anyone would like to go into that in more detail, we can
do so later - but I must not get distracted here.
In practice, even in music which has a tempo, local time is all-important.
The durations of notes are judged relative to the durations of other notes in the vicinity,
not to some absolute, mechanical duration measured by a stopwatch or metronome. Absolute time
is actually an unnecessary concept in music written with the standard symbols. If we want,
non-dualistically, to say that the symbols mean the corresponding performed notes, then we
have to admit that the symbols don't have absolute meanings at all. The meanings change according
to their local context.
The whole point of performing music in real time is to cultivate a tradition of performance
practice by forming real events.
Symbols on paper have never told the whole truth about music. Attempts to deny the existence
of performance practice, stored in human memory, quickly become uninteresting. Music is about
performances in context. It is about memory not machines. Information, not matter. Software,
not hardware.
A sociological aspect of this: The increased reliance on performance practice leads to an
increased dependence on soloists and conductors. This is especially true in new pieces. It
takes a great deal of rehearsal time to establish a new performance practice tradition. Because
performers and conductors are the people who actually give concerts which generate revenue,
they come to dominate the institutions.
Written music continues with Neo-Classical time; the sun finally sets on the late Romantics (such
as Richard Strauss). The beginnings of a recording industry. The rise of Jazz.
Invention of commercial recording
This has practically no effect on the written tradition, but is of crucial importance to the
development and dissemination of Jazz and popular music. Recordings are canned performance
practice. Jazz musicians work in small groups, and have a practical, undogmatic attitude to
notation. Some inimitable performers become world famous: Think of Caruso, Bessie Smith, later
Edith Piaf, Frank Sinatra. Recordings create globalised aural traditions, enabling music to
develop by imitation, without reference to notation problems.
“Performance practice” is only vaguely understood
and is certainly not integrated into any notation theory. Composers (writers of music) try
to pretend that performance practice doesn't exist (remember Stravinsky's dictum: “Just
play what I've written - no more, no less.”). They are highly suspicious of “virtuosos”.
For economic reasons (rehearsal time), they can only write music which has a Neo-Classical
or Romantic performance practice. They repress the problem of time in favour of more immediately
accessible topics (e.g. pitch). Composers have to write.
Sociological developments
(Remember that the Romantics believe that there is a sublime spiritual world, complementing
the dead material world, and that this spiritual world is the only worthy environment for
the human spirit.) In politics, the Romantic tradition loses two world wars. Kaiser Wilhelm
and Hitler both ignored the material consequences of trying to realise their Romantic visions.
(In the bunker under Berlin in 1945, Hitler actually said: “The German people are not
worthy of my dream.”) Since that time, especially in Germany, modern liberal democracies
regard charismatic politicians with great suspicion.
But composers, institutions and the general public continued to regard artists as heroes.
This is particularly so in popular culture (this is the classic era for film stars...). Conservative,
inherited attitudes persisted in institutions such as orchestras - while new music becomes
increasingly problematic. Neo-Classicism is the only practicable alternative to continued
Romanticism. In its denial of “expressivity” it is fundamentally at odds with
the institutions which grew up to support heroes. Extreme Romantics finally lose control altogether.
Most composers arrive at some kind of compromise.
Sociological developments (continued)
In Germany and France, the situation is especially interesting: Stockhausen and Boulez, as
young, hardworking, free-thinking composers become the focus of an attempt to transcend the
past. The Zeitgeist is summed up in the term “Stunde Null” - Zero-Hour. Many continental
Europeans are trying to forget the past. Young artists, who have no history, are needed to
build a new culture.
So it is a whole new generation of composers who have to attempt a solution to the underlying
problems of music notation. Interestingly, being unaware of the much larger picture, they
even attempt to straddle the gulf which had opened up between the arts and sciences (The “Two
Cultures” debate comes up again in a moment).
Common sense among the Avant-Garde can be summarised as follows:
Standard notation is thought of as a sacrosanct cultural anchor.
This is really a mixture of technical and sociological problems. The attitude can be further
paraphrased as:
“Let's keep the standard notation as it is (because it is successful, obviously very powerful,
and everyone agrees about what it means). We can invent alternative notations for the musics
it cannot describe.”
This attitude still persists today, because it is thought that introducing new ways of thinking
about notation would be too difficult. I believe, however, that the existence of software
now presents us with a solution to this problem.
“Absolute time” is still thought of as the “real” meaning of the duration symbols.
This assumption comes from the Romantics via the Neo-Classicists, and probably stems from
Newton's time paradigm in the late 17th century.
It is vaguely and wrongly thought that time is freely subdividable.
In fact, time is indivisible; all that can be done is to compare tempi.4 It is nonsense to use
subdivision symbols in music which avoids a perceivable tempo. But composers did this all the
time - tempo was itself undesirable, having been worked to death by the Neo-Classicists in the
previous decades...
The most important alternative to standard notation is “Space-time” or “Pianola roll” notation.
This is a precise, low-level mechanical notation which cannot be performed correctly by human
performers because it is illegible. Composers used it where they didn't want an “expressive”,
non-mechanical result (standard notation being thought to describe mechanical time).
There is an assumed equivalence of notation and performance.
It is assumed not only that a performance can be inferred from a notation, but also that the
reverse is the case: that the notation can be inferred from the performance. This idea derives
not only from the false assumption that time is equivalent to a dimension of space, but also
from the false assumption that performance practice can be completely ignored. If notation
and performance were completely equivalent, composers would have total control over performances.
The power of graphics was thus grossly overestimated. This situation led not only to the writing
of some very beautiful looking scores :-), but even more importantly to the idea of creating
music directly in space on recording media.
Recording devices were machines for converting space into time. I think it is no accident
that electronic music begins to be developed in this post-neo-classic period. Electronic music,
created directly by splicing tapes together, does indeed remove the dimension of performance
practice from the act of composition.
One of the implications of the proposals I'm making is that it should be possible to build
recording devices which read and interpret symbols. Current digital technology is a step in
this direction.
Here are a few examples of how some of the leading composers of the time fit into the picture
I'm painting of the period (in no particular order):
Cage: When he uses standard notation, he has the inherited Neo-Classical
attitude to it. Use of other notations leads to uncontrollability and Chance as ideology:
He makes the best of a world out of control. Eventually his graphics no longer try to notate
time at all.
Earle Brown: I include him here, because I remember an interesting
encounter with him while I was preparing a score of his for Universal Edition around 1972.
He was adamant that having square noteheads would influence the performers to play the beginnings
and endings of notes more cleanly, more mechanically. This is a good example of what I mean
when I say composers thought of notation and performance being equivalent.
Boulez: A good description of the notation-performance equivalence can be found in “Boulez on
Music Today” (1963)5. He describes the achievement of “striated” and “smooth” time by using
notations specialised for one or the other.6
Stockhausen: For obvious reasons,
I'll say a little more here - but remember I'm not a musicologist, I'm talking from personal
experience of working with the man for 27 years. The definitive text on Stockhausen's attitude
to time (and hence notation) is “...how time passes...” (1956/57)7. In that essay:
The duration symbols are thought of as being strictly equivalent to (absolute, mechanical)
metronomic time.
There is no attempt to deal with the typographical aspects of symbols - these are assumed to
be irrelevant.
There is a fascinating description of the time continuum which exists from the smallest
vibration to the longest duration.
He distinguishes three main areas: Pitch (shorter than ca. 1/16 sec.), the durations used
in rhythm and metre (ca. 1/16 sec. - ca. 6 sec.), and longer durations. In my view, music
written with the standard symbols is organised around the concept of an event. (An event is
the equivalent in time of an object in space.) In my terminology, Stockhausen is saying that
the smallest events which have a pitch are ca. 1/16 sec. long. I see this as an example of
the way our perception chunks information when this becomes too much to handle. All our perceptions
seem to be layered in this way. At the machine level, the durations of events and the durations
of high frequency vibrations both occur on the same time axis, but at the symbolic level (the
perceived level) they are experienced and notated in different dimensions. In two-dimensional
symbolic notation, events lie at the crossroads between the vertical and the horizontal.
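Stockhausen's three duration areas, as described above, amount to a simple classification by threshold. Here is a sketch of my own (the thresholds are the approximate values from the text; the label for the third area is my choice, since the text only says “longer durations”):

```python
def duration_area(seconds):
    """Classify a duration into Stockhausen's three perceptual areas
    (thresholds from "...how time passes..."): vibrations shorter than
    ~1/16 s are heard as pitch; ~1/16 s to ~6 s as rhythm and metre;
    anything longer belongs to large-scale form."""
    if seconds < 1 / 16:
        return "pitch"
    elif seconds <= 6:
        return "rhythm/metre"
    else:
        return "form"

print(duration_area(0.002))  # a 500 Hz vibration period: pitch
print(duration_area(0.5))    # a note at MM 120: rhythm/metre
print(duration_area(20))     # a section-length duration: form
```

At the machine level all three cases lie on the same time axis; the classification only exists at the perceived, symbolic level - which is exactly the chunking point made above.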
In the 1950s, there was no mature concept of chunking. The subroutine had been invented (or
discovered) in the late 40s or early 50s, but as far as composers were concerned, computers
and their languages were still in the realm of science fiction. So Stockhausen saw (and sees)
no essential difference between mechanical and perceivable time, regarding mechanical time
as being fundamentally perceivable.
Consequently: a) he saw no reason why he shouldn't use additional, “accurate” symbols
for describing absolute time (e.g. metronome marks such as 72.5). There is much wishful thinking
of the form: “Musicians and audiences will learn to hear these values in future, even
if the present generation can't.”
And b) he ignored the part played by tempo in the concept of “subdivision
of time”. Indeed, he saw (sees) no obstacle to the use of unlimited degrees of subdivision,
and introduced a concept of “Fields of imprecision” to describe what happens when
increasingly complex standard notation is used. (This also relates to Cage, Boulez and the
concept of chance.)8
Stockhausen continues to agree with Stravinsky's dictum: “Just do what I wrote.”
A personal recollection (this comes from a much later period, but it fits in here because
Stockhausen's basic attitude has not changed): The score of his
Helikopter-String-Quartet has two parts: The first
part is his original score written in standard notation (the colours just make the relative
heights of the glissandi easier to see); the second part consists of screen shots of what
he saw while using Protools for the mix-down for the CD. (I edited the screen shots, and we
made a few helpful additions while preparing the score for publication.) I well remember him
saying that the second part of the score is the
real score. I think he wishes players
could read such notation.
The leading members of the Avant-Garde give up attempting reforms of the notation and revert
to using the standard notation in some form or other. This currently means writing various
kinds of Minimalist, Neo-Classical and Neo-Romantic music. Only notatable music can be written!
Standard music notation stops developing.
This was an emergency solution. How else could composers keep writing, and at the same time
work within practicable rehearsal times?
Stockhausen again: Following the experience of trying to work with non-notated pieces in the
late 60s, and at the world exhibition in Japan in 1970, he now recognises the extreme importance
of developing and preserving a tradition of performance practice. Especially in his late work,
performance practice is now crucially preserved in the recordings he has sanctioned. He goes
to great lengths to train specialist performers how to perform his pieces. These performers
are intended to be the seed for a new tradition. Apart from the use of recordings, there is
no current alternative to using the methods practiced by the followers of Chopin.
For practical, systemic, institutional reasons, many composers have tried to revive the Romantic
vision of the Artist-as-Hero - and in doing so to play down the music world's failure to solve
its underlying technical problems. The image of the composer as poet whistling in the woods
while the notes land on the paper as if by magic, is a tried and tested way to survive commercially.
But as I've already pointed out, the Romantic dream has become discredited, and the position
is by no means comfortable.
The world has used this breathing space in the development of standard notation to revive
performance practices for ancient music. As in the case of jazz, this would have been much
more difficult without the existence of recordings.
Apropos recordings: As I've said, recording devices are sophisticated machines for converting
space into time and vice versa. They can exactly reproduce a performance of a series of irregular
durations. Notice that metronomes can also be thought of in this way: as primitive recording
devices. In a culture without recording devices, metronomes provide the only universal alternative
to personal tuition when one is trying to give meaning to (or retrieve meaning from) a notation.
It's not surprising that metronomes became increasingly important to the 19th century time
paradigm, even though in practice musicians were becoming more and more flexible...
I'd like to end this historical survey by telling you a story about how the internet is changing
the way institutions work. I think it's very important to try to keep one eye on the overall
picture.
The story is about my web site and, in particular, about my essay “The Notation of Time”
(1985).
For me, “The Notation of Time” marked a personal breakthrough.
I had decided that solving the fundamental problems in music notation was more important to
me than becoming a composer. While continuing to keep my nose to the drawing board as a practical
copyist, I had been able to stand back from the problems of day-to-day composition, and look
more dispassionately at the problem of music notation. The essay became a frontal attack on
the standard notation and the reactionaries who thought they could take advantage of the Avant-Garde's
collapse. Apart from containing an early version of the above historical survey, the essay
places great emphasis on maintaining a strict distinction between the typographical rules
governing symbols (in space) and the meanings of those symbols (in time). It clearly benefits
from my having spent years pushing ink about on paper without having to think too much about
the symbols' meanings.9 It could
only have been written by a copyist.
On a sociological level, the essay was published in “Contact” magazine in 1985.
Contact was a journal of contemporary music, and a means of communication between young composers
- it was an idealistic enterprise, subsidised by the Arts Council of Great Britain. My essay
had, of course, been rejected by more scholarly journals because I'm not an academic, and it
didn't follow the usual rules. Contact magazine ceased to exist in 1986 or 1987, and the essay
was soon forgotten, except by myself and a few close friends. Since that time, I've been pushing
these ideas at people in the academic world whenever possible - but the academic world had
its own rules, and nobody took these things really seriously until I was able to publish them
on the internet in 1999. Even institutions can't afford to ignore accessible ideas that may
contain a seed of truth. This is a good example of how the internet has begun to revolutionise
the framework for research and institutions.
The proposals described in Part 2 of this talk will address the following problems:
- How can performance practice be integrated into a workable notation theory?
- How can the typographical (spatial) behaviour of symbols be cleanly separated from their
meanings in time?
- How can symbols’ meanings be related to local context?
- How can high-level meanings be notated?
- How can a performance practice (temporal information related to many different performances
of the same score, or even different scores) be isolated from a particular performance of a
particular piece? — If performance practice could be isolated in a piece of software, then it
might begin to develop organically again.
- How can we produce realistic recordings of new scores before they have been performed
live? — We need recordings (not just metronomes) for communicating performance practice and
reducing rehearsal time. We need software which will help composers achieve their goals within
a reasonable time.
Before I go on to talk in detail about the technical aspects of the solution I'm proposing,
I'd first like to say a few words about its social aspects.
It seems to me that the role of artists needs to be rethought. My own view is that we no
longer need heroes because it is no longer thought that the (material) world is ultimately
explainable. We no longer need a dualistic opposition between material and spiritual worlds.
Artists can no longer pretend to be above nature, but we do still need a few iconoclasts working
outside particular human systems - reminding us that those systems are intrinsically incomplete
(remember Gödel). This also has a lot to do with the question of interdisciplinarity.
The difficulty for this kind of artist is that he/she can't expect any help from the institutions.
On the surface, this probably sounds as if I'm an anarchist - but that's not quite true! The
answer lies in further precision, and thinking in terms of levels. I have always thought that
individual freedom of thought has to be financed with money earned through other services to
the community.
In short, I think artists have to become independent of those decadent institutions which
still expect composers to solve their own technical problems while continuing to be “heroes”.
The problem of music notation can't be separated from its social aspects - it's about communication,
and is an intrinsically social phenomenon. So we need to think carefully about how the current,
systemic problems can be overcome.
The sociological aspects of software development are only just beginning to be understood.
Most current software has been developed for commercial reasons, and is the result of user
demand. Such software tends to support existing, perceived needs within the existing socio-economic
structures, and so to support those structures.
We are however here trying to transcend the systemic problems of decadent institutions, so
we can't expect much support from that direction.
Recently, the world has seen how software developed in a non-commercial context can cause
great sociological change. Browsers and the internet arose out of a need discovered within
scientific research establishments. They were not the result of public demand, and were not
developed from existing, commercial software. The revolution they caused happened when the
commercial world and the general public discovered that they made life more profitable and/or
interesting.
So I think that the proposals I'm making could be developed, like
browsers, in a non-commercial context, before they are let loose on an unsuspecting public.10
This presentation develops ideas which have been evolving in my previous papers. These papers
can be found at my web site.
My proposal is for a Graphic User Interface with which music can be edited and developed.
The software and its context have four interacting regions:
-
The User (composer / performer) at the top middle of the following diagram
-
Events in Time (bottom left). These are created and edited by the user with any hard- and
software which may be available. This could be anything from a simple microphone to a large
studio having many synthesisers, samplers etc. Events in time are the “performance”.
-
Objects in Space (bottom right). The “score” which can be edited by the user
-
Beneath the User is a block containing the generalised temporal and spatial definitions of
the symbols to be used in the score. These generalised definitions can also be edited
by the user (vertical, double headed arrow). The definitions are in libraries, and can be
used to produce a score by analysing a performance. These libraries can also be edited by
the user (personalised notation practice and performance practice). A particular score contains
special, local values for the symbols it contains, so it can be performed directly, bypassing
performance practice (just as any other recording such as a CD). The notation practice and
performance practice libraries can be shared between different users, who are able to personalise
their preferences (e.g. for what default effect a staccato dot has on the chord it modifies).
The libraries contain both the “notation practice” (generalised spatial behaviour
of the defined symbols) and the “performance practice” (generalised meaning in
time of the defined symbols) for a culture of users.

Notice that the User is part of a double feedback loop. The score can be edited (object creation,
writing) - whereby default values are taken from the libraries - and performed so that the
events can be heard (listening, event perception). Going the other way round the diagram,
something can be performed (event creation), resulting in visible relationships in the score
(object perception). This double feedback loop was, I think, responsible for the development
of written, western music. The current breakdown of this double feedback loop is, I think,
responsible for the current lack of development in written music.
Traditionally of course, scores have been written on two-dimensional paper, and “performance
practice” has been developed and stored in the minds of composers and performers.
We are dealing here with music represented on computer screens.
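The flow between score, libraries and performance described above might be sketched in code roughly as follows. This is only an illustrative sketch in Python; all class names, fields and values are my own assumptions for the example, not part of the proposal itself. The essential point it shows is the fallback order: a score event's local value overrides the library's generalised definition, so a score can be performed directly, bypassing performance practice.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SymbolDefinition:
    """One entry in the shared libraries: a symbol's generalised spatial
    form (notation practice) and generalised meaning in time
    (performance practice)."""
    name: str
    default_duration_ms: float   # generalised temporal meaning
    glyph: str                   # generalised spatial form

@dataclass
class Library:
    """Shareable, user-editable notation/performance practice."""
    symbols: dict = field(default_factory=dict)

    def define(self, sym: SymbolDefinition):
        self.symbols[sym.name] = sym

@dataclass
class ScoreEvent:
    """An object in space: a symbol, plus an optional local value
    overriding the library default."""
    symbol: str
    local_duration_ms: Optional[float] = None

def perform(score, lib):
    """Turn objects in space into events in time, preferring the score's
    local values over the library's generalised ones."""
    events, t = [], 0.0
    for ev in score:
        dur = ev.local_duration_ms
        if dur is None:                       # no local value:
            dur = lib.symbols[ev.symbol].default_duration_ms
        events.append((t, ev.symbol, dur))    # (onset, symbol, duration)
        t += dur
    return events

lib = Library()
lib.define(SymbolDefinition("quarter", 500.0, "q"))
lib.define(SymbolDefinition("eighth", 250.0, "e"))

score = [ScoreEvent("quarter"), ScoreEvent("eighth", local_duration_ms=230.0)]
print(perform(score, lib))
# [(0.0, 'quarter', 500.0), (500.0, 'eighth', 230.0)]
```

Editing the library (a different performance practice) would change the first event; editing the score's local value would change only the second.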
The proposed GUI consists of nested editable windows.14
The libraries which contain the generalised spatial and temporal definitions of the symbols
and analog controls are stored in corresponding, interdependent libraries.

This is the process whereby each event in a series of events (represented in a space-time
diagram) is given a separate symbol (or name), and the event's name is connected to the space-time
info at a lower level. Consider the following space-time diagram:

As far as a machine is concerned, this is a single, undifferentiated curve. People however
instinctively break such curves into manageable chunks. Such chunks can be labeled just by
putting a dot on each peak (the dot might be oval, like a notehead). Alternatively, the labels
could be numbers or letters or duration symbols etc. giving more precise information about
the event.
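As a rough illustration of this chunking, here is a minimal Python sketch that finds the peaks of a sampled curve, each of which could then receive a dot or other label. The function name and the threshold-free peak test are assumptions made for the example; a real implementation would need smoothing and a perceptual model.

```python
def chunk_by_peaks(samples):
    """Mark the local peaks of an otherwise undifferentiated curve.
    Each returned index is a candidate position for a label (a dot,
    a notehead, a letter, a duration symbol...)."""
    peaks = []
    for i in range(1, len(samples) - 1):
        # a sample is a peak if it rises from the left and does not
        # rise further to the right
        if samples[i] > samples[i - 1] and samples[i] >= samples[i + 1]:
            peaks.append(i)
    return peaks

print(chunk_by_peaks([0, 3, 1, 4, 2, 5, 0]))  # [1, 3, 5]
```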
The lengths of the “events” can be classified, irrespective of the existence of
a tempo, using a logarithmically constructed trammel. Using the classic duration symbols means
that legibility can be improved later (horizontal spatial compression, use of beams), and
it becomes easy to develop closely related higher level notations.

It would be useful if standard notation of tempoed music could be a special case here. Standard
notation has evolved to be very legible, so it would be a pity to throw away that advantage.
A histogram can always be constructed from the lengths of the events (for example by first
sorting the lengths into ascending order), so if the diagram contained lengths having proportions
2:1 (as in classical music without triplets), then it would be very easy to construct a trammel
to produce a transcription similar to classical notation. If there are no such proportions
in the original diagram, the user might relate the trammel to the shortest length, or try
to ensure maximum differentiation in the resulting transcription. In any case, the user should
have control over the trammel and the transcription.

I'm using space to demonstrate the algorithm here, but non-dimensional numbers (or bits in
a MIDI stream) would also work. Note that beaming (which has been used freely here) improves
legibility, and has no other function as far as this transcription is concerned.
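A logarithmically constructed trammel of the kind described above can be sketched in a few lines of Python. The symbol names and the power-of-two step are assumptions for the example; as suggested in the text, the reference unit defaults to the shortest length, but the user should be able to set it.

```python
import math

# illustrative duration classes, shortest first
SYMBOLS = ["semiquaver", "quaver", "crotchet", "minim", "semibreve"]

def trammel_classify(lengths, unit=None):
    """Classify event lengths, irrespective of tempo, by snapping each
    length to the nearest step of a logarithmic (power-of-two) trammel
    above a reference unit."""
    unit = unit if unit is not None else min(lengths)
    classes = []
    for length in lengths:
        step = round(math.log2(length / unit))   # nearest trammel step
        step = max(0, min(step, len(SYMBOLS) - 1))
        classes.append(SYMBOLS[step])
    return classes

# lengths in roughly 2:1 proportions transcribe like classical notation
print(trammel_classify([0.26, 0.5, 1.02, 0.25]))
# ['semiquaver', 'quaver', 'crotchet', 'semiquaver']
```

Sorting the lengths before classification would give the histogram mentioned above, from which the user could judge whether 2:1 proportions are present and adjust the trammel accordingly.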
The use of trammels is generalisable for other parameters. Consider the following:

In addition to using the durations trammel (as previously), this transcription has been made
with a trammel for “dynamics” (the height of each event, see above left) and a
trammel for “pitches” (the colour of the event, see below).
(Any steps you can see in this scale from black to white result from
limitations in the GIF standard.
The scale is conceptually continuous.)
Interestingly, the perception of equal steps in both pitch and dynamic is related to logarithmic
steps at the machine level (both the vertical scale and the gray scale in the above diagrams
should be considered logarithmic).
The “pitch” symbols are purely arbitrary here (e.g. I could have used alphanumeric
symbols, and/or the grayscale might have denoted some other parameter - e.g. synthesizer patch).
I've done this to make it clear that there is but a short step from here to something like
standard notation - and I'm trying to rescue as much legibility as possible...
Once the actual values have been chunked and given a label (symbol), the windows S1 and A1
can be completed in the score (GUI). The A1 windows
contain the actual, precise values taken from the machine level of the original events. No
information is lost.
It is quite conceivable that more complicated symbols could be similarly defined at this
stage - for example staccato dots and accents, classifying particular envelope forms.
All these trammels (or their numeric equivalents) connect symbols with their generalised meaning,
and are stored in the library S1 together with the definitions for how each symbol moves about
in space. (The library is, of course, a software module which can be selected and changed
by users.) Library S1 might also contain a precise default value for each symbol, for
use when inserting a new event in window S1, but the default value could also be a
function of the local context at which such an event is inserted (e.g. the pitch of
leading notes is sharper than other notes...). There may be feedback between the S1 window
and its symbol library (see Music Notation and Agents
as Performers). The default value is not necessarily just the mean value of the possible
range, though this might be a good place to start before getting more complicated.
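A default value that is a function of local context might look something like the following sketch. The leading-note rule is the example given in the text; the function names, the cents table and the +15-cent adjustment are all illustrative assumptions.

```python
# default pitches of the natural notes, in cents above C
DEFAULT_CENTS = {"C": 0, "D": 200, "E": 400, "F": 500,
                 "G": 700, "A": 900, "B": 1100}

def default_pitch_cents(note, context):
    """Return the library's default tuning for a note symbol, adjusted
    by the local context at which the event is inserted."""
    cents = DEFAULT_CENTS[note]
    # example context rule: a leading note (a semitone below the
    # following note) is played slightly sharp
    nxt = context.get("next")
    if nxt is not None and DEFAULT_CENTS[nxt] - cents in (100, -1100):
        cents += 15  # assumed sharpening, in cents
    return cents

print(default_pitch_cents("B", {"next": "C"}))  # leading note: 1115
print(default_pitch_cents("D", {"next": "E"}))  # unaffected:   200
```

The rule itself would of course live in the user-editable library, so that different cultures of users could share or personalise it.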
The character set for event lengths (flags etc.) needs to be complemented by a set of symbols
for vacant spaces between events. The traditional symbols for rests would seem to be
the logical choice (legibility preservation).
Many parameters may have symbols for transformation. But this is not true of durations. Durations
already have a time component, so they can't transform - there is no such thing as an event
whose length changes (!)... Transformation symbols for other parameters may include
-
diminuendo and crescendo hairpins for dynamics
-
glissando lines for pitch
-
general purpose arrows
-
etc.
Here is the transcription from §2.2.4
again:
Legibility can be improved, and a higher density of information achieved, by:
-
moving the symbols closer together horizontally
-
omitting repeated dynamics
-
adding slanted beams to create groups
Group symbols such as beams and omitted dynamics might be defined for the second level symbolic
window (S2).
The “pitch” characters are not necessarily related to pitch - they can
also be used for other parameters. This would reduce the number of symbols whose spatial behaviour
has to be defined. Such symbols have abstract uses - they could, for example be used as general-purpose
slider knobs. Possibly alternative representations for each parameter should be available
(e.g. dynamics with the traditional symbols or as noteheads)
Remarks:
-
Any symbolic level window can contain many, parallel, multi-parameter tracks. Notice the advantage
of using staves and ledgerlines rather than putting all the parallel tracks on top of each
other on the same graph.
-
In this example, I have used only the stafflines and ledgerlines for the eight
standard dynamics. MIDI velocities might be chunked with a smaller granularity - using spaces
and/or more ledgerlines.
-
There should be a form of “clef” for each staff, indicating the range of values
notated.
-
The view might have two modes: either space-time or horizontally compressed for maximum info
per window.
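The chunking of MIDI velocities mentioned in the remarks above might be sketched as follows. The eight dynamic names follow standard practice, but the equal-width class boundaries are an assumption for the example; a finer granularity would simply use more classes (more spaces and/or ledger lines).

```python
# the eight standard dynamics, softest first
DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def velocity_to_dynamic(velocity):
    """Chunk a MIDI velocity (0..127) into one of eight equal-width
    dynamic classes."""
    if not 0 <= velocity <= 127:
        raise ValueError("MIDI velocity must be in 0..127")
    return DYNAMICS[velocity * len(DYNAMICS) // 128]

print([velocity_to_dynamic(v) for v in (10, 64, 127)])
# ['ppp', 'mf', 'fff']
```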
These notes were the same as the notes I made in
Developing Music with Software Tools §2.2.
I gave the lecture on 28th February 2002. These footnotes were added in March 2002, after
I had returned to Europe.
Except for the removal of references to publisher's problems, this section (1.1) duplicates
material first presented in my presentation “Developing Music with Software Tools”,
delivered at the Centro tempo Reale in November 2001.
return
I recommend listening to the whole piece while following the full score. How should
one notate what the performers actually did?? return
They become involved with conflicting concepts. It becomes difficult for them to know what
to think or what they mean... return
At this point I demonstrated in real time why the subdivision of a single segment of time
is impossible. See also the argumentation in
“The Notation of Time”. return
Pierre Boulez: Penser la musique aujourd'hui (Paris, 1963); English translation by
Susan Bradshaw and Richard Rodney Bennett, as Boulez on Music Today (London: Faber and
Faber, 1971), especially pp. 91-94. return
As far as I'm concerned, there is only one kind of time. The different forms of this time
should be notatable within the same notation paradigm. I think Boulez gets involved with ambiguities
(bottom of p. 93) for two basic reasons: 1) as a post-neo-classicist, he assumes the notation-performance
equivalence (ignoring performance practice) and 2) because both “striated” time
(where there is a perceivable tempo) and smooth time (no tempo) can be notated using both
1950s standard notation and pianola roll notation. And there is no clear dividing line between
striated time and smooth time. When does a tempo cease to be perceivable? return
Karlheinz Stockhausen: ...wie die Zeit vergeht... in German in Texte zur Musik Band
1 (DuMont, Cologne 1963 - now available from the Stockhausen-Verlag);
Also in Die Reihe #3 - English translation as “...how time passes...” by Cornelius
Cardew, 1959. return
a) The footnote to Stockhausen's Klavierstück I says that the complex time proportions
(notated with subdivision symbols) should be worked out and played as tempo changes, but this
does not get us very far because the tempi still have to have mechanical orders of precision.
b) Notice also that the mechanical level corresponds to “hardware”, and the perceived,
symbolic level to “software”. See the chunking procedure in Part II.
c) Notice that there are neither objects (in space) nor events (in time) at the purely mechanical
level (the level Stockhausen is dealing with). I think that objects (space) and events (time)
are created by our perception as a strategy for reducing the amount of information it has
to handle. Before chunking occurs, what we perceive is purely amorphous. See also e.g. Goodman:
Languages of Art (Hackett Publishing Company Limited, 1976): “Experiment has
shown that the eye cannot see normally without moving relative to what it sees; apparently,
scanning is necessary for normal vision”. So time is necessary for normal
vision. Something similar can be said of ears: holding them stationary drastically impairs
their ability to perceive three-dimensional space. If there are no events or moving objects,
time and space cease to exist... return
The Notation of Time also benefitted from my having begun to program computers (1980), and
from my having read (among other things) Douglas Hofstadter's “Gödel, Escher, Bach:
An Eternal Golden Braid” (Basic Books, 1979).
return
As I pointed out in “Developing
Music with Software Tools ” , I think that the development of music can
be led by software, not only because it reduces our dependence on institutions but also because
software can be understood and used by people who understand neither how it was written nor
the design decisions which have been taken. It is not necessary for users to be experts on
the design of user interfaces. The whole point of those design decisions is to make the software
transparent, so that users can get on with what they want to do. See also
my email exchange with Nicola Bernadini last November.
return
Part 2: While I presented diagrams very much like these in Santa Barbara, I extemporised their
explanation. The lecture in Santa Barbara was filmed, so it is possible to reconstruct what
I actually said. The explanations here describe the state of these ideas in March 2002, and
are more or less what I said on February 28th. return
The following changes have been made to this diagram in March 2002 (after my return from Santa
Barbara): return
-
Added the arrow at the bottom of the diagram, going directly from the score to the performance
(bypassing the notation practice and performance practice libraries).
-
The arrow between the “performance” and the libraries is now one-directional.
-
Text added below the “score”.
-
The word “Interaction” added between the score and the libraries (the generalised
definitions in the libraries may somehow be affected by the particular values in a particular
score - see my paper Music Notation and Agents as Performers).
return
Changes corresponding to those in Footnote 12 have been made to this diagram (also after my
return from Santa Barbara). return
In Santa Barbara, I was asked about the future performance of music from paper. I think it
is so practical for performers to use paper, that this will continue. It is both unnecessary
and prohibitively expensive for performers to read computer screens. I expect it will be possible
to define one of the symbolic levels to be printable for performers, but composers and editors
don't have to work at that level all the time... return