This Study began in the spring of 2010 as part of the process of developing and
debugging Moritz’ Assistant Composer and Assistant Performer modules.
It began as a test case for the use of MIDI control texts written into standard notation (CapXML) scores by the Assistant Composer,
but ended as my final farewell to standard, 19th century music notation.
These scores all sound very similar. They use the same input
krystals and therefore have the same form, but the palette structures changed slightly for Study 2b1, again
for 2b2 / 2b3, and again for Study 2c, so the audible results are slightly
different. There are, however, considerable differences in the way these scores look
graphically and in the way they are saved as computer files.
- The audio and score of Study 2a, completed in July 2010.
- The audio and score of Study 2b1, completed in October 2010.
- The audio and scores of Studies 2b2 and 2b3, completed early in May
- The audio and scores of Study 2c, completed in March 2012.
Study 2 is also about polyphony. Study
1 had been homophonic, and I wanted to create a polyphonic composition algorithm.
Avoidable complications arose, however, because Study 2 was originally restricted
to standard music notation:
- standard music notation assumes a common tempo between parallel staves. It
therefore insists on making duration classes ‘add up’ within each bar.
- there is no humanly perceptible reference tempo between the staves in Study 2.
The musical structure of Study 2
I therefore decided to implement the scores of Studies 2b2 and 2b3 using non-standard notations.
Each bar in each staff contains between 1 and 7 chords, corresponding to the values
in a krystal strand. In the original standard notation (Studies 2a and 2b1), I had
to make the bars ‘add up’ in the parallel staves, so the bars originally
had lengths of 1, 2 or 4 quavers, and the chords used invisible tuplets per bar.
Each chord in the top staff was a (tuplet) quaver having a separately defined millisecond duration.
The durations of the chords in the top staff are neither related to each other by
a common tempo, nor are they related directly to the duration class symbols. This
makes the scores of Studies 2a and 2b1 quite difficult to follow.
In 2b2 a simple graphical notation is used, and in 2b3 the standard duration class
symbols are associated with ranges of duration — as in my earlier,
handwritten pieces and my transcription of Curtis Roads’ Sonal Atoms.
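The idea of associating duration-class symbols with ranges of duration can be sketched in a few lines of Python. The class names and millisecond thresholds below are invented for illustration; the actual ranges used in 2b3 are not specified here.

```python
# Illustrative sketch: each duration class covers a range of real durations.
# Names and thresholds are hypothetical, not the values used in Study 2b3.

DURATION_CLASSES = [
    ("semiquaver", 0, 250),     # milliseconds
    ("quaver", 250, 500),
    ("crotchet", 500, 1000),
    ("minim", 1000, 2000),
]

def duration_class(ms):
    """Return the duration-class symbol for a duration in milliseconds."""
    for name, lower, upper in DURATION_CLASSES:
        if lower <= ms < upper:
            return name
    return "semibreve"  # anything at or above the final threshold

print(duration_class(300))  # -> quaver
```

Under such a scheme the symbol no longer promises an exact duration or a common tempo; it only tells the reader roughly how long the event lasts.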
The temporal information, common to Studies 2b2 and 2b3, can be represented graphically
in an unlimited number of ways. I chose two:
When a performance of a score is started, the Assistant Performer first creates
a ‘MIDI-Score’ containing lists of MIDI messages waiting to be sent.
This information can either be played by the assistant alone, using the default timings,
or performed by a live musician who triggers the events in real time.
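The 'MIDI-Score' described above can be pictured as lists of MIDI messages, each list carrying a default onset time. The following Python sketch is my own illustration of that idea, not the Assistant Performer's actual code; the class names and the raw message bytes are assumptions.

```python
# Hypothetical sketch of a 'MIDI-Score': moments of MIDI messages with
# default timings for machine performance.

from dataclasses import dataclass, field

@dataclass
class Moment:
    default_ms: int                               # default onset time
    messages: list = field(default_factory=list)  # raw MIDI message bytes

@dataclass
class MidiScore:
    moments: list = field(default_factory=list)

    def machine_performance(self):
        """Yield (time_ms, message) pairs using the default timings."""
        for moment in self.moments:
            for msg in moment.messages:
                yield moment.default_ms, msg

score = MidiScore([
    Moment(0,   [bytes([0x90, 60, 100])]),  # note-on, middle C
    Moment(500, [bytes([0x80, 60, 0])]),    # note-off, middle C
])
events = list(score.machine_performance())
```

A live performer would simply replace the default times: each trigger sends the next moment's messages immediately, whatever the clock says.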
In 2a and 2b1, the MIDI information is inferred from the graphics: noteheads
in the chord symbols, and visible control texts attached to the chords in the (CapXML) scores.
In 2b2 and 2b3, the MIDI information is read entirely from extensions to the standard
SVG format in which they are written. This decouples the (spatial) graphics
from the temporal and MIDI information. The Assistant Performer can easily create
a ‘MIDI-Score’ from the temporal and MIDI information alone, without looking at the
graphics. Arbitrary graphics can thus be associated with the temporal and MIDI information.
In a little more detail: At the top level in this extended SVG file format, MIDI
instructions, together with the default timings necessary for their machine performance,
are associated with symbols at the system and staff levels. Systems and staves are abstractions which
relate to the finite size of pieces of paper and the need for parallel event symbols
(polyphony). Note that these abstractions do not imply any particular notation.
They are just a standard hierarchy of containers which allow parallel time axes
to be represented on 2-dimensional screens or pieces
of paper. The shapes and complexity of the graphics depend entirely on the authoring
software. The symbols can be of any shape and complexity. Staves could be notated
vertically as far as I'm concerned...
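The hierarchy of containers can be pictured as nested classes. 'System' and 'Staff' are named in the text; the 'Voice' level carrying a single time axis is my own assumption, added to make the polyphony explicit.

```python
# Illustrative container hierarchy for parallel time axes. The 'Voice'
# level is an assumption; the other names follow the text above.

from dataclasses import dataclass, field

@dataclass
class Voice:        # one time axis: a sequence of event symbols
    events: list = field(default_factory=list)

@dataclass
class Staff:        # parallel voices notated together
    voices: list = field(default_factory=list)

@dataclass
class System:       # one horizontal band on the screen or page
    staves: list = field(default_factory=list)

@dataclass
class Score:        # finite pages force the music into successive systems
    systems: list = field(default_factory=list)

score = Score([System([Staff([Voice(["chord 1", "chord 2"])])])])
```

Nothing in this hierarchy says what an event symbol looks like; the drawing is left entirely to the authoring software.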
The Assistant Composer now writes the specialized graphics, the default temporal
information and the MIDI information into the score's file(s), using information
taken from its input palettes. The graphics are just for human consumption, for
reading ahead, performing, structural analysis etc.
Study 2b2 uses elementary symbols (like
the coloured numbers in the Assistant Composer’s “Why ornaments?” documentation) spread across simple staves.
Study 2b3 replaces the simple symbols of
2b2 by standard chord symbols on standard, five-lined staves. The symbols are spaced
across the systems according to a sophisticated algorithm similar to one which might
have been used for standard notation. This score looks more like Study 2b1, except that
the duration classes are now related to the durations notated inside the score.
The Assistant Composer can now write scores in which standard chord symbols can
be placed freely at any point on any staff, making the composition (and automatic
transcription) of polyphonic music much simpler. The default (machine-defined) durations,
and the symbols which represent them, are now completely decoupled from any assumptions
about perceived time. I am hoping that this will eventually enable me to write music
which can breathe again...
Any kind of notation can be
embedded in this extended SVG format. I have in mind: Gregorian Chant, standard
notation, ordinary language text, non-standard notations of any kind — even those
which are read vertically, or are animated etc. The format can also, with a little
work, probably be used to connect timings to areas inside scanned images.
Even if the MIDI information
is omitted, the default object/event timings can still be associated with positions
within 2-dimensional graphics of any kind. Imagine a cursor following an audio or
video recording of an ancient or 20th-century manuscript score, or being able to
click on the graphics/image to jump to a position in a recording. Maybe one is learning...
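The cursor-following and click-to-jump behaviour imagined here amounts to a two-way mapping between time spans in a recording and rectangular areas of an image. The Python sketch below is a hypothetical illustration; all names, coordinates and timings are invented.

```python
# Hypothetical mapping between recording time and areas of a scanned page.
# Each entry pairs a time span (ms) with an image rectangle (x, y, w, h).

REGIONS = [
    (0,    2000, (100, 50, 80, 120)),
    (2000, 4500, (190, 50, 80, 120)),
]

def region_at(time_ms):
    """Where should the cursor be while the recording plays?"""
    for start, end, rect in REGIONS:
        if start <= time_ms < end:
            return rect
    return None

def time_for_click(x, y):
    """Clicking inside a region jumps the recording to that region's start."""
    for start, _end, (rx, ry, rw, rh) in REGIONS:
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return start
    return None
```

With such a table extracted from a scanned manuscript, neither MIDI data nor any particular notation is needed: the image and the timings are enough.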
Extended SVG files can be edited
using programs such as Inkscape without losing the non-graphic extensions. This means that text and graphic
annotations can be added to such scores without making them unperformable. The annotations,
which would be purely for human consumption, could have academic uses or be performance
instructions. For example, my Assistant Performer knows nothing about slurs
because they mean something too complex to be translated into MIDI instructions.
The performance of a slur (phrasing) is not something that can be mechanically fixed.
Phrasing is intimately related to the uniqueness of a particular performance, and
has to be understood as part of a living performance practice tradition. I can well
imagine adapting my old “convert to slur” FreeHand Xtra
for use in Inkscape.
SVG is a format which can
be displayed by most of today’s browsers, and it has interactive capabilities
which I hope to exploit further in future. On the agenda are user performance of
on-line scores, and the assisted performance of unfettered scores in real performances...
Moritz produces MIDI output, but the sound of a MIDI file is more or less undefined
unless one has access to a system like the one on which it was created. This is
especially the case in extreme examples like Study 2, which contain rapid sequences
of control messages.
My reference synthesizer for Study 2 was the Microsoft GS Wavetable Synth
(supplied as part of the Windows Vista Ultimate 64bit operating system), which I
played using either Moritz’ Assistant Performer or Windows Media Player.
The present recordings were made by balancing all the settings as well as I could
using that system.
Other MIDI performance systems may give a completely wrong impression of the piece.
March 2012: QuickTime, for example, not only has different timbres and balance from
my reference system, but it also appears to respond more slowly to control messages.
This can mean that a patch change, for example, may only take effect after
the chord for which it was intended. In Study 2, that is pretty catastrophic. Not
only the balance, but even the logic is wrong! On my system, QuickTime has been
known to give up
altogether, two or three bars before the end of the piece. It still
sometimes omits the final chord in the performance.
It is to avoid such problems that I have converted all the original MIDI files
to mp3s for this website.
The mp3 files were created by simply converting Moritz' MIDI output with
WAV MP3 Converter. This converter not only converts between many different
audio formats, it can also convert MIDI files to audio. I also tried converting
the MIDI files to wav, but the increase in quality is too small to justify the increased
file sizes (and corresponding load times) at this website.
Remember that it is the information in the score and corresponding MIDI files that
counts here: The audio files are presented only to ensure that the sounds heard
at this website are logically
correct. Stronger, more transparent, better
balanced performances can doubtless be produced from the MIDI files by specialists
using more powerful synthesizers and/or post-production software.
This is a matter of expertise and interpretation. In other words:
I would very much like to hear other interpretations of the score and MIDI originals,
maybe using other sounds, synthesizers and/or post-production software!