For the 19 years before Stockhausen began to use computer technology to publish his scores,
I copied them at a drawing board with pen and ink.
I bought this computer because I wanted to reduce the amount of paperwork associated with
an Abstract Data Type I was developing. I'm interested in algorithmic composition.
In 1993, the then current version of Finale was hopelessly inadequate for the problems confronting
us, and I had to find some answers pretty quickly in order to keep my job. I was already programming
in C, and was thus able to write a special filter to improve Finale's PostScript output
enough for that to be edited economically in FreeHand.
FreeHand is a very powerful graphics editor, used routinely in the advertising industry. It's
a great program, with lots of money behind it - but it doesn't know anything about music, so
I began writing Xtras for it when that became possible in 1995.
I have never had to deny Stockhausen any of his most extreme demands as the result of inadequate
software, so I am not convinced that publishers' basic problems have anything to do with
that side of things. I've brought a few example scores with me, if anyone wants to look
at them later. Some example pages can be found at my web site.
I'm currently looking around for new ways to finance my research & development projects.
The commercial music publishing world is at present dominated by two major programs: Finale
and Sibelius, and I have decided to learn the latter. I might just as easily have chosen the
latest version of Finale. Probably I was irrationally influenced by the fact that Sibelius
is a British program :-). Both these programs meet the minimum graphic requirements for publishing
conventional music, and they can both be used to input such music in a commercially viable
time - but, especially in non-conventional music, neither of them provides the precision, power
and flexibility achieved by combining FreeHand with my Xtras.
Fundamentally, I think that useful new ideas (theories) are generated by people who deal very
directly with real, practical problems. Practicalities take precedence over, and are the basis
for, theory.
A Graphical User Interface like this would be useful for orientation in recording studios. Symbols
are easier to read than space-time diagrams.
We perceive pitch, not frequency (i.e. A-natural, not 440 Hz). We can't count the 440 Hz. References
to absolute time occur only in windows A1, A2 etc., where we can adjust machine-related parameters.
There are no such references in the symbolic level windows, because these contain symbols
which are simply the names of "subroutines". No symbol has a fixed, absolute meaning.
The meanings may be arbitrarily complex, and are defined in the library modules.
A similar argument forbids the use of absolute metronome marks in the symbolic level windows.
The symbols are names whose meaning depends on the meanings and relative positions
of other symbols in the vicinity. Their absolute position is independent of those meanings,
and can be used for the practicalities of concentrating information within particular spaces
(windows or pages) and for improving legibility.
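As a toy illustration of this idea (the symbol names and context fields here are my own invention, not part of the project), a library module can map each symbol name to a context-related function, so that a symbol's meaning is only computed when its local context is known:

```python
# Hypothetical sketch: symbols are names; their meanings are functions
# of the local context, looked up in a library module.

def accent(context):
    # An accent strengthens whatever event it is attached to.
    return {"dynamic": context["dynamic"] + 1}

def tenuto(context):
    # A tenuto lengthens the event relative to its neighbours.
    return {"duration_factor": 1.2}

# The "library module": symbol name -> meaning (a context-related function).
LIBRARY = {"accent": accent, "tenuto": tenuto}

def interpret(symbol_name, context):
    """Resolve a symbol's meaning in its local context."""
    return LIBRARY[symbol_name](context)

print(interpret("accent", {"dynamic": 3}))  # {'dynamic': 4}
```

No symbol here has a fixed, absolute meaning: the same "accent" produces a different result in a different context, and redefining the library redefines the symbol everywhere it occurs.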
Music notation has from the very beginning been a reminder of what already exists (think of
the history of Gregorian Chant). If no libraries exist defining symbols and their meanings,
then those libraries have to be created by referring to existing events. It's only possible
to work clockwise round Figure 1, so that a development spiral begins as the result of user
space-time feedback, once the libraries exist.
Again, the 19th and 20th century assumption that music symbols could be given fixed, absolute
meanings meant that performance practice was not properly integrated into the notation theory.
There was an inadequate idea of what expressivity is supposed to be.
High level grammars only develop in a written culture. A Bruckner Symphony cannot be improvised.
So it's important that the conventions and limitations of writing are well understood. Otherwise,
as in the last hundred years, the development of written music will run into trouble. Maybe
high level grammars are what music is.
There's no absolute time, or absolute meaning, in the symbolic levels.
Absolute time is irrelevant to our perception of music - so it should play no part in any
analysis of what the symbols mean to humans. The concept of absolute time is only relevant
when dealing with physical machines. This happens at the clearly defined machine level.
The symbols of human languages are unlike the symbols of mathematics. Their meanings are not
primitives, but complex, context-related functions.
The idea of Relativity is closely related to the idea of local context in physics; this is
because there is a maximum speed for the transmission of information.
I think music's time notation paradigm collapsed at the start of the 20th century at
the same time, and for very similar reasons, as the time paradigm in physics did.
19th century notation theory, like 19th century physics, assumes the existence of absolute
time, so it gets confused about the notion of subdivision. The 19th century symbols
for the subdivision of time (tuplets) actually mean tempo relationships, so they are meaningless
in tempoless music (one can only compare tempi if they exist) - this was a real conceptual
mistake in the notation theory of the 1950s and 60s. Using tuplets to write music which avoids
perceptible tempi is simply non-sense.
Tempo and the ether: tempo was ubiquitous in 19th century music, and it plays a role there
as a frame of reference not unlike the role played by the ether in physics. Many kinds of
music have no tempo, but 19th and 20th century notation theory persistently assumes that an
absolute tempo must exist - even if that tempo cannot be perceived!
Metronomes. In music, the only relevant durations are those in the local context and those
which can be retrieved in local time from long term memory. And in music, short-term memory
takes precedence. For example, even when performing music which is supposed to be at tempo
metronome=100, the local tempo which is actually happening takes precedence over the absolute
value stored (approximately) in long term memory. All human space-time experience is local.
We only need absolute values when dealing with machines.
The symbols of music notation are the names of concepts which have been learned by
composers and performers (time is not just equivalent to a dimension of space), so music notation
can be thought of as an authoring or programming language whose classic interpreters are
people.
An event is the equivalent in time of an object in space, and the nested structure
of the Symbolic Level Windows very much resembles the way in which current computer programs
are structured, so it ought to be possible to develop this project easily using existing
object-oriented programming languages.
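As a minimal sketch of this correspondence (the class and field names are my own assumptions, not drawn from the project), nested events map directly onto an ordinary object-oriented structure:

```python
# Hypothetical sketch: an "event" in time as the analogue of an
# "object" in space. Events nest, like the Symbolic Level Windows.

class Event:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # nested sub-events

    def depth(self):
        """How many symbolic levels this event spans."""
        if not self.children:
            return 1
        return 1 + max(child.depth() for child in self.children)

# A phrase containing two bars, one of which contains two notes.
phrase = Event("phrase", [
    Event("bar 1", [Event("note a"), Event("note b")]),
    Event("bar 2"),
])

print(phrase.depth())  # 3
```

The nesting here (phrase, bar, note) is exactly the kind of multi-level containment the Symbolic Level Windows display, which is why an existing object-oriented language seems a natural fit.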
However, there are implications for the development of programming languages here too.
The lowest level music symbols are as small as possible (single characters or simple lines).
These symbols are combined two dimensionally to create larger, more complex symbols and a
maximum density of legible information in the two dimensions of the page. This is important
when having to read in real time, extract meanings at the highest possible level, and turn
as few pages as possible while doing so.
Current computer programming languages are based on alphanumeric text, and the symbols (the
names of objects or functions) are generally word-sized. The words are groups of characters
chunked in a single dimension. The chunking in this case enables an unlimited number of atomic
symbols to be created from a finite character set. Interestingly, such text is usually formatted
in two dimensional space so as to increase (human) legibility. Contrast this situation with
that of ordinary text, where a single string of words and punctuation is simply folded onto
the page.
It may be possible to create specialised computer programming languages, for use outside music,
in which an increased density of information is achieved because they use character-symbols
arranged two-dimensionally instead of as one-dimensional word sequences. The compiler (parser,
interpreter, performer) would have to be more complicated, but the script could be smaller
(faster to transmit). Note that atomic symbols arranged in three or more dimensions have a
still higher density of information... (I'm thinking of the structure of proteins).
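Purely as a toy illustration of the two-dimensional idea (my own invention, not any existing language), a "script" can be laid out as a grid of single-character symbols, where one axis carries a dimension of meaning that one-dimensional text would have to spell out in words:

```python
# Toy sketch: a two-dimensional "script" in which single-character
# symbols gain meaning from their position. Rows are voices, columns
# are successive time steps; '.' means silence.
script = [
    "x.x.",  # voice 0
    ".xx.",  # voice 1
]

def parse(grid):
    """Return (voice, time) coordinates of every sounding symbol."""
    return [(v, t)
            for v, row in enumerate(grid)
            for t, ch in enumerate(row)
            if ch != "."]

print(parse(script))  # [(0, 0), (0, 2), (1, 1), (1, 2)]
```

The parser is more complicated than a one-dimensional tokenizer, but the script itself is denser: voice and time are encoded positionally rather than written out as words.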
I think that local context is an important concept in both space and time, and that
its active application to the use of symbols, in all areas of human cognition (philosophy,
literature, painting, architecture, music, computing, mathematics, the sciences etc.), would
have far reaching consequences.
This is rather a big subject, and I have a separate paper
here which
covers it in a little more detail. Briefly, this possibility arises because in this User Interface,
the score consists of programmable symbols with clearly definable meanings. The symbols are
the names of chunked concepts at lower levels, and the meanings of those names are stored
in libraries. So the libraries as a whole can be thought of as the interpreter
of the multi-level score. This would be very difficult to achieve using standard notation,
because the symbol levels are all collapsed onto the one level of a sheet of paper. And some
symbols have to mean different things in different levels. In 20th century standard notation,
they are even connected conceptually to absolute time - the non-symbolic machine levels.
Within the multi-level notation paradigm, it is possible to analyse the differences between
lots of different performances of the same score - also by different performers. If this software
existed, it would become possible to store the multi-level score of a Beethoven Sonata on
one CD, and the Agent for performing it on another. The Agent might, for example, be the result
of a contextual analysis of different performances of the piece by Alfred Brendel. The Alfred
Brendel agent could then be used to interpret a newly discovered Beethoven Sonata which
the real Alfred Brendel had never seen - and the performances would vary, just as they do
with real performers.
A less ambitious but more important goal than this rather futuristic scenario could be the
use of trainable Agents by composers trying to develop a performance practice for the symbols
they are using. Composers know best what they mean by the symbols they use, and are thus the
best people to program the meanings. If composers could present performers with demonstration
recordings of how the notation should be understood, that would shorten rehearsal time considerably.
The use of libraries in this situation means that performance practice could begin to develop
again together with the composer's grammar.
Notice that current recording technology uses a flat, one-dimensional data stream to store
performances. This proposal has interesting implications for the recording industry - and
for publishers, because they could sell their heroes directly on CD. I can already see them
fighting about who owns Stravinsky, Glenn Gould and Karajan
... :-) ...