Hear Daniel on the SoundNotion podcast
by Daniel Spreadbury | May 27, 2013 | MAKING NOTES | 14 comments

David MacDonald, Patrick Gullo, Nate Bliton and Sam Merciers were kind enough to invite me to appear on the SoundNotion podcast this week, and I had a great time chatting with them about the work we’re doing on our new scoring application, SMuFL and Bravura. You can download the podcast from the SoundNotion.tv web site, subscribe via iTunes, or watch the whole thing on YouTube. Enjoy!

14 Comments
Trackbacks/Pingbacks
- Watch Daniel on the SoundNotion podcast… again | MAKING NOTES - […] nearly two years since my last appearance on the SoundNotion podcast, hosts David MacDonald, Nate Bliton and Sam Merciers invited…
Most interesting. Now promise us you will go to sleep instead of living through a user board through the night! 😉
Wonderful, Daniel!
As you were speaking, I had a couple of ideas.
The first is that fonts and other representations need to be extensible somehow. Take something like Solesmes notation, not to mention the huge range of notations composers have come up with over the last 100 years and continue to come up with: I don’t expect a new program to be able to do those things out of the box, but I would love to be able to do them with plugins. The main thing is that I don’t want the program to represent data in a way that makes such things impossible.
As I think about that, it seems to me that the notion of sound objects might be useful. Sound objects are born, live, and die. A score could, for the most part, simply consist of a collection of these objects, along with their birth times. (It would also probably be good to have a concept of a stream: a set of sounds in sequence from a single instrument.) Your program would provide support for the common objects (notes played by different instruments, and perhaps recorded clips with certain transformations applied) and would be extensible so that other objects could be created: new data types, new transformations, and so on. Ideally you’d be able to plug in a program to render the object, and such renderers could even respond to external input.
In addition to the auditory rendering of sound objects, they would need one or more visual renderers (perhaps two at minimum: one for printed scores, one for the interactive screen). Allowing the same sound object to be rendered in different ways in different sorts of scores can solve a lot of problems. Allowing new renderings for existing sound objects via an extensible framework will keep the program from becoming obsolete AND encourage creativity. Visual renderers could even operate on a video screen in real time, in addition to producing printed output, making AV compositions possible.
Finally, there has to be a way to create and edit the sound object itself. I don’t have much to say about this except that it too needs to be extensible.
It seems to me that the auditory representation of the sound objects needs to be the primary representation–the canonical representation.
Once you figure out the framework, it’s then relatively easy to create a set of sound object types, along with renderings for the interactive screen, for print, and for playback. A lot of hard work, but when you want to support new sounds, new editing, and new appearances, the basic framework is there.
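To make this concrete, here is a rough sketch (in Python, purely for illustration) of how such a sound-object model with pluggable renderers might hang together. Every class and method name below is invented for the example; it is not a proposal for an actual API.

```python
# Illustrative sketch only: sound objects with birth times, streams, a score,
# and pluggable renderers for print and audio. All names are made up.

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List


class SoundObject:
    """Something that is born at a time, lives for a duration, and dies."""

    def __init__(self, birth_time: float, duration: float):
        self.birth_time = birth_time
        self.duration = duration


class Note(SoundObject):
    """A built-in sound object: an ordinary pitched note."""

    def __init__(self, birth_time: float, duration: float, pitch: str):
        super().__init__(birth_time, duration)
        self.pitch = pitch


@dataclass
class Stream:
    """A sequence of sound objects from a single instrument."""
    instrument: str
    objects: List[SoundObject] = field(default_factory=list)


@dataclass
class Score:
    """A score is little more than a collection of streams."""
    streams: List[Stream] = field(default_factory=list)


class Renderer(ABC):
    """Plug-in point: print, interactive, and auditory renderers share this."""

    @abstractmethod
    def render(self, obj: SoundObject) -> str:
        ...


class PrintRenderer(Renderer):
    def render(self, obj: SoundObject) -> str:
        if isinstance(obj, Note):
            return f"engrave {obj.pitch} at beat {obj.birth_time} for {obj.duration}"
        return f"engrave generic object at beat {obj.birth_time}"


class AudioRenderer(Renderer):
    def render(self, obj: SoundObject) -> str:
        if isinstance(obj, Note):
            return f"play {obj.pitch} at t={obj.birth_time}s for {obj.duration}s"
        return f"play generic sound at t={obj.birth_time}s"


if __name__ == "__main__":
    score = Score(streams=[Stream("Flute", [Note(0.0, 1.0, "C5"), Note(1.0, 0.5, "D5")])])
    for renderer in (PrintRenderer(), AudioRenderer()):
        for stream in score.streams:
            for obj in stream.objects:
                print(renderer.render(obj))
```

The point of the sketch is simply that new object types and new renderers can be added without touching the core data model.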
This scheme also allows the flexibility to change input methods easily. One of my difficulties with Sibelius was always that, for mouse input, it DIDN’T behave like every other Windows app, especially regarding selections when clicking the mouse. (At a higher level it’s excellent; it’s at the note and chord level that I’ve been frustrated. The biggest issue is that objects disappear while editing in ways I wish they wouldn’t.) With a scheme like this, it would be relatively easy to have different input protocols, so that existing users wouldn’t be inconvenienced while new users would have a more Windows-like UI. And it would make handwriting recognition more feasible on tablets with digitizers and pens (like many Windows tablets).
The key thing, though, is that the extensible framework makes it possible to do part of what we need today AND to add in new functionality later–perhaps even very deep functionality as third-party plugins.
I’m very excited for the opportunities you have and what you’re doing already! Please keep up the excellent work!
Daniel – a few requests to add to your wishlist for development of “TheThing”:
1. Record audio/MIDI into Cubase in ReWire playback sync with TheThing
2. Playback of parts on slave monitors in sync with Conductor score on master monitor
3. Cloud computing version, so one can keep scoring at the airport, on iPad, or anywhere with web access by logging on, and keep going if the software is on the device, with the score mirrored in the Cloud.
4. Continuous Auto update so no version is ever out of date, i.e. everyone is always using the latest version. To fund this approach, I suggest a licence fee rather than outright purchase.
I’m sure I will come up with more!
Insightful interview! One question that popped into my mind when you were talking about the possible platforms the Thing may appear on: Is there any chance of a Linux version?
@Christian: Right now we have no plans for a Linux version, I’m afraid. Sorry!
Daniel – following on from my post above:
…
5. Backward compatibility with Sibelius scores, ideally with Finale et al. too
6. Forward compatibility, ideally by Open Sourcing the new application (as you have already done with your new Bravura font), so if Yamaha “does an Avid” on Steinberg, we’re not left with yet another music software carcass.
@Derek: You and I have already discussed these issues privately, so it’s a bit naughty of you to publish them here in the hopes of attracting lots of “me, too!” responses from other users! Our new application cannot be backwards compatible with Finale and Sibelius, since the file formats of those applications are protected by copyright and cannot legally be reverse-engineered. Our application will, however, support MusicXML as fully as is practical for both import and export, to try to provide the best interoperability options possible between scoring applications.
Our application will also not be open source, which is a simple business decision that we have made. But we are not deaf to the concerns of users who are worried about putting their work into a proprietary application, and our application will contain not only good options for exporting files into other formats (graphical, audio, and semantic), but also a sufficiently sophisticated scripting API that it will be practical to develop exporters for other formats should they be required.
Could the API also be used to make controllers, or is it just for the sorts of scripting plugins we know and love from Sibelius?
@David: What kinds of controllers do you have in mind?
I’m thinking of something like the MIDI controller sitting in front of me in the video, or a tablet app that could display contextually relevant controls and options, like a more powerful and tangible version of Sibelius’s palette.
@David: The honest answer is that, at the moment, I’m not sure. The simple scripts we can build right now basically execute and then stop, whereas you would need some kind of listener mechanism in the software to be ready to receive input from a controller at any time, and that may or may not be a good fit for the scripting layer. However, I do think that support for external controllers is desirable, so we will no doubt try to find a way to enable this, but it may not be achieved by way of scripts. Sorry I can’t be more definite at this stage.
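Purely to illustrate that distinction, here is a small sketch with an entirely made-up host object: a one-shot script runs to completion and is gone, whereas a registered listener stays resident so the host can hand it controller events whenever they arrive. None of this reflects any real or planned API.

```python
# Hypothetical sketch only: the "host" below is a stand-in, not a real
# scripting API. It contrasts a one-shot script with a resident listener.

from typing import Callable, Dict, List


class HostStub:
    """Stand-in for an imaginary scripting host."""

    def __init__(self) -> None:
        self._listeners: List[Callable[[Dict], None]] = []

    def run_script(self, script: Callable[["HostStub"], None]) -> None:
        # One-shot model: the script runs to completion and is then gone.
        script(self)

    def add_controller_listener(self, listener: Callable[[Dict], None]) -> None:
        # Listener model: the host keeps the callback and invokes it later.
        self._listeners.append(listener)

    def controller_event(self, event: Dict) -> None:
        # Called by the host whenever external controller input arrives.
        for listener in self._listeners:
            listener(event)


def one_shot(host: HostStub) -> None:
    print("script ran once and stopped")


def on_controller(event: Dict) -> None:
    print(f"controller sent {event['type']} value {event['value']}")


if __name__ == "__main__":
    host = HostStub()
    host.run_script(one_shot)                           # executes, then nothing remains
    host.add_controller_listener(on_controller)         # stays resident
    host.controller_event({"type": "cc", "value": 64})  # arrives later, still handled
```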
Fantastic Daniel – good luck with everything. Here’s what I would find very interesting: a full integration of the notation features of The New Thing with Steinberg’s Cubase/Nuendo, using all of the playback (MIDI and audio) features of Cubase, such that notes in The New Thing can be manipulated simultaneously in Cubase (including automation, plug-ins, and tweaking of MIDI notes with expression, etc.). That would allow much greater control over the audio output than is currently possible in Sibelius and would make the transition between computer music and printed music for live recording and performance much smoother. By tapping into Steinberg’s existing technology you could save yourself a lot of time, and Steinberg would have the edge on everything else out there.
Great job, Daniel! I’m really looking forward to hearing how this develops. This blog is now in my daily reader.
Really great interview. For nearly 9 years I have used Sibelius as a great tool for learning and creating music. I hope that the new software will be much “smarter” than Sibelius and Finale. I can’t wait for the release of the first version…