We’re now a few days into April, and our team has been working for Steinberg for five months (we started on 5 November last year, which happened to be my birthday). Although I can’t share many details about what we’re working on, perhaps a few notes on what we’ve been up to will be of interest.


With the exception of a visit to Hamburg in the middle of the month to get to know some of our new colleagues, we spent the rest of November sitting together in our temporary office with our notebooks on our laps, holding a series of discussions about the different functional areas of our future scoring program: essentially a big brain dump of the requirements we could remember from our days working on that other famous scoring program, along with ideas about how we could improve each area.

A page from the notebook I filled in November 2012.


Some really good ideas came out of those discussions, and indeed by the end of the month we had already settled on the basic conceptual model for how our new program would think about music, and how we could design the program in such a way that it would provide greater flexibility and freedom to composers and arrangers than other scoring programs.

Once all of us had filled our notebooks, we started transferring our notes to the internal wiki, fleshing them out and categorising them. By the end of the month, we had a pretty good high-level list of requirements, spanning more than 60 different functional areas. With all of these notes now safely transferred to our collective outboard brain, we were free to move on to the next challenge.


Having spent a whole month with the whole dirty dozen of us sitting together every day, we now started to divide and conquer: the testing team started generating test data (in MusicXML) based on the requirements we had discussed to date; the programmers started sketching architecture ideas on whiteboards; and my partner in crime Anthony (@anthughes) and I started discussing the user interaction and visual design of the program.

We also started going out to meet music professionals and publishers. I was invited to attend a meeting of the Music Writers’ Committee at the Musicians’ Union, which led to our group discussion and brainstorming session a couple of months later, and Anthony, James (one of our wonderful programmers) and I went out to meet the editorial and engraving staff at Peters Edition and Boosey & Hawkes.

Anthony and I also went up to Amersham to meet with Bev Wilson, a very experienced engraver who worked for nearly thirty years for Halstan, which until the 1990s was one of the busiest music engraving houses in the UK. Halstan’s approach – which they called The Halstan Process – was unusual inasmuch as it was based on photographic reproduction of over-sized positive originals, produced by brushing ink (from a water-soluble ink block) through metal stencils onto white paper. For large-scale orchestral or band scores, engravers would sometimes have to work with the top of the page flipped up, over their heads and behind them, to avoid it trailing on the floor in front of their desks! You can see and hear Bev talking about The Halstan Process in this video, produced for the Open University:

We were very interested to talk with Bev because we wanted to examine music engraving from first principles, not assuming that we knew how music spacing should be done just because we had previously worked on another scoring program. These days, Bev still works freelance as an engraver, mostly on choral music for Oxford University Press, though of course now he brings his experience to bear on his work using Sibelius rather than ink and stencils.


In the New Year, I started drafting the design for our application’s default music font. In keeping with our general approach of looking further back than the computer engraving of the past 25 years or so – and of examining what was done before even music typewriters and other short-lived technologies challenged the way music had been prepared for publication for the preceding 200 years – I canvassed some experienced musicians to find out which scores, from which publishers and eras, they especially liked the look of. Some of the more experienced engravers remembered the dry transfer system Not-a-set, which was used for a couple of decades after traditional engraving was deemed too expensive and before computer engraving was capable of producing acceptable results.

Not-a-set was based on a set of engraving punches used by Schott, in turn based on the punches used by Breitkopf & Härtel, the world’s oldest music publishing house. Not-a-set would serve as an excellent starting point for a new music font, because the symbols are printed on the dry transfer sheets unencumbered by staff lines, so you can really examine the shapes of the symbols very closely in order to draw them in a vector drawing or font program.

Getting hold of Not-a-set wasn’t easy, since it hasn’t been used by anybody in anger for more than 20 years, but thanks to the generosity of Bev Wilson in the UK and Peter Simcich in the US, I was able to scrape together enough examples to be able to produce digital versions of the majority of the basic symbols.

On the left, a scan of the Not-a-set G clef; on the right, a vector version.


Compared with, say, Opus, the Not-a-set symbols look more substantial, and in general the music appears a little bolder and blacker on the page, which aids legibility when reading at a distance.

Anthony, James and I continued visiting music publishers, including a meeting at Faber Music with Paul Tyas and Elaine Gould, author of the wonderful Behind Bars, and I also had a couple of very productive meetings with professionals working in musical theatre, to get a feel for the specific requirements of people working in that field.


As the winter months dragged on, we kept warm in our basement office by slaving over our hot keyboards. The testing team had by now generated dozens, if not hundreds, of test cases in MusicXML format, exporting them from other scoring programs and stripping out data that doesn’t serve our purposes using XSLT, in some cases hand-coding specific details that the MusicXML exporters – or indeed the scoring programs themselves – don’t handle.
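The idea of stripping unwanted data from MusicXML test cases can be sketched in a few lines. This is purely a toy illustration (not our actual pipeline, which uses XSLT): it uses Python’s standard library to remove layout-oriented elements – here the real MusicXML elements `<defaults>` and `<print>` are chosen as examples – while leaving the musical content untouched.

```python
import xml.etree.ElementTree as ET

# Layout/appearance elements we don't need in purely musical test cases.
# (Illustrative choice: <defaults> and <print> are genuine MusicXML elements.)
STRIP_TAGS = {"defaults", "print"}

def strip_layout(xml_text: str) -> str:
    """Return the MusicXML with layout-oriented elements removed."""
    root = ET.fromstring(xml_text)
    for parent in root.iter():
        # Copy the child list, since we mutate it while iterating.
        for child in list(parent):
            if child.tag in STRIP_TAGS:
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")

sample = """<score-partwise>
  <defaults><scaling><millimeters>7</millimeters></scaling></defaults>
  <part id="P1">
    <measure number="1">
      <print new-system="yes"/>
      <note><pitch><step>C</step><octave>4</octave></pitch>
        <duration>4</duration><type>whole</type></note>
    </measure>
  </part>
</score-partwise>"""

cleaned = strip_layout(sample)
```

A real transformation would of course handle far more element types, which is exactly what makes a declarative tool like XSLT attractive for the job.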

Meanwhile, the programmers had started to build the lowest levels of the musical brain of our new scoring application, including fundamental things like deciding how pitch and duration of notes will be stored. One of the programmers spent a couple of days building a simple piano roll event view able to display pitch and duration of notes – but still no actual music notation display at this point. The simple piano roll display and the low-level engine were lashed together into a test harness application, which could import certain primitive types of data (notes, but not even rests at this point) into the low-level model. The very first steps had been taken!
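To give a flavour of the kind of fundamental decision involved, here is one plausible scheme for storing pitch and duration – not necessarily the one our programmers chose. Representing durations as exact rational fractions of a whole note (rather than floating-point numbers) means that tuplets never accumulate rounding error, however much the music is edited:

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class Note:
    midi_pitch: int     # e.g. 60 = middle C; a real notation model would
                        # also store spelling (step + alteration + octave)
    onset: Fraction     # position, in whole notes, from the start
    duration: Fraction  # length in whole notes (1/4 = crotchet)

# A crotchet middle C on the first beat, followed by a triplet quaver:
notes = [
    Note(60, Fraction(0), Fraction(1, 4)),
    Note(62, Fraction(1, 4), Fraction(1, 12)),  # triplet quaver = 1/12
]

# Exact arithmetic: summing tuplet durations produces no rounding drift.
total = sum(n.duration for n in notes)
assert total == Fraction(1, 3)
```

With floating-point durations, by contrast, three triplet quavers sum to something very close to – but not exactly – a crotchet, which makes “does this bar add up?” checks fragile.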

I carried on working on music font design, and soon discovered that I needed to take a step back and consider how the font itself would be set up. Over the course of several weeks, I surveyed many existing music fonts, the Unicode range for musical symbols, and the standard texts on music notation to try to create a categorised list of all of the symbols used in Conventional Music Notation (Donald Byrd’s term). Although this work is far from complete, I have now built a list of around 800 unique symbols, divided between nearly 60 different categories. I will write more about this mapping and how I hope it can become a new standard for people who want to design music fonts in future.
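A categorised symbol list of this kind is, at heart, a simple mapping. The sketch below is only a toy (the glyph names are illustrative, not our actual mapping), but the codepoints shown are the genuine Unicode Musical Symbols values for those characters:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Glyph:
    name: str        # canonical glyph name (these names are illustrative)
    category: str    # one of the ~60 categories mentioned above
    codepoint: int   # e.g. a value from the Unicode Musical Symbols block

GLYPHS = [
    Glyph("gClef",         "clefs",     0x1D11E),  # U+1D11E MUSICAL SYMBOL G CLEF
    Glyph("fClef",         "clefs",     0x1D122),  # U+1D122 MUSICAL SYMBOL F CLEF
    Glyph("noteheadBlack", "noteheads", 0x1D158),  # U+1D158 NOTEHEAD BLACK
]

def by_category(glyphs):
    """Group glyph names by category, for the kind of survey described above."""
    out = {}
    for g in glyphs:
        out.setdefault(g.category, []).append(g.name)
    return out
```

One attraction of publishing such a mapping is that any font designer who follows it would produce a font that is a drop-in replacement for any other, since every symbol would live at a known, named position.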

Anthony and I also spent considerable time locked away in the broom cupboard-style meeting room in our temporary office discussing further user interaction principles. Being able to think freely about how every aspect of the application will work without worrying about having to fit features into a mature application with well-established idioms for interaction is incredibly liberating, and I am confident that we are coming up with better, simpler and more efficient workflows for inputting and editing notes and other musical objects in your score.


Settling in to our new office near Old Street.


At the very end of February, we left our temporary home near Kings Cross and moved to our new permanent home a couple of miles to the east, a short walk from Old Street station. We now have plenty of room to spread out and, hopefully, to grow our team in the future.

The burgeoning test harness for our new program’s musical brain is now able to display real musical notes! It’s only a single line of notes, only the rhythms are displayed rather than the actual pitches, and the spacing is crude, but as a demonstration of the next steps in the musical brain’s understanding of music it’s an exciting moment: from the fundamental way in which pitch and duration are stored we can now see notes of the correct duration that can reformat and reflow themselves as they are edited. Baby steps, but important ones.

Another important step was the implementation of the first few functions in our application’s scripting API. At the moment we’re using Lua, since it is small, fast, efficient, highly portable, well-suited to embedding, and already used with great success in a number of high-profile applications (including Adobe Lightroom) and game engines (the middleware developers use to handle 3D graphics, physics, game logic and so on in games for consoles and PCs). The hope is that we will be able to deliver a fully-featured scripting API that allows users to create sophisticated scripts that add valuable functionality to the application. Early investigations into using the Koneki IDE to write and debug scripts running in our test harness application are promising, so script developers should have a very comfortable environment in which to work.

To help our testing team verify that the musical brain is doing the right kinds of things when performing simple editing operations, we’ve also started work on exporting MusicXML files from our test harness. Data can now be imported into our test harness via MusicXML, and then exported again, making it possible to see what is happening to the music in another scoring application.
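Checking a round trip of this kind boils down to importing a file, exporting it again, and comparing the two structurally rather than byte-for-byte. As a hedged sketch (not our actual test code), Python’s standard-library `canonicalize` function, with its `strip_text` option, makes two MusicXML documents compare equal even when indentation or quoting differs:

```python
import xml.etree.ElementTree as ET

def round_trip_ok(imported: str, exported: str) -> bool:
    """True if two MusicXML documents are structurally identical.

    C14N canonicalisation with strip_text=True ignores incidental
    differences in indentation and attribute quoting, so only real
    musical differences cause a mismatch.
    """
    return (ET.canonicalize(imported, strip_text=True)
            == ET.canonicalize(exported, strip_text=True))

# Same music, different formatting: should compare equal.
before = "<note><duration>4</duration><type>whole</type></note>"
after = "<note>\n  <duration>4</duration>\n  <type>whole</type>\n</note>"
assert round_trip_ok(before, after)
```

Of course, MusicXML allows some genuinely equivalent content to be encoded differently, so a production comparison would need domain-specific normalisation on top of plain XML canonicalisation.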

By the end of March, I had completed first drafts of all 800 or so musical symbols that will be included in our new music font, and once our application is capable of displaying more than simply note durations, I will be able to iterate those designs and get more and more of the font looking great.

And the next five months…?

It’s too early to guess what the state of our burgeoning application will be in another five months. We are taking our time over the fundamental design decisions because we want to make sure we build the most forward-looking, flexible system possible. I will keep you posted!