Editorial Note: Over the past few months, we have noticed many impressive apps built by Tero Parviainen of creative.ai popping up on Twitter, many of which use Magenta’s music models. His work helped inspire us to focus our efforts on an easy-to-use JavaScript API called Magenta.js, and Tero has been kind enough to lend some of his expertise and code to the effort. We believe this API significantly lowers the technical barriers for others to participate in the kind of creative development Tero has been exploring. In this post he shares some of his process for developing Magenta-powered musical apps, which we hope will inspire you to build your own. It certainly has inspired us!

I’m one of those people who always loved music but never became a musician, and was left feeling vaguely wistful about what could have been. That is, until a couple of years ago, when something connected and I found a way to make a lot more room for music in my life without straying too far from the path I was already on professionally.

The key realization was that even though I was not a musician, I could take my existing skills and interests in software development and design and use them as a lens to point toward music. This illuminated the direction I’ve been heading in ever since: Exploring intersections between music, software, design, and AI - and having a blast doing it.

So far on this journey I’ve studied and reproduced some generative classics by Brian Eno and Steve Reich in JavaScript. I’ve taken a deep dive into the wonders of Terry Riley’s In C. Last year I had the privilege of bringing Laurie Spiegel’s seminal intelligent instrument Music Mouse to the web platform. And a few months ago I got a chance to sum up my thoughts on generative music in a presentation at Ableton Loop.

Most recently I’ve been examining some new technical possibilities on this path, brought on by the arrival of deep neural networks and their musical applications in the web browser. Libraries like TensorFlow.js and the brand new Magenta.js bring with them a whole new set of toys for web developers to play with, in the form of various models designed, trained, and shared by the Google Brain Magenta team. So I’ve used these models to make a few experiments.

Each of the following demos represents a different way I’ve tried to answer the question “what is a fun thing I could do with this tech that can help me connect with music in a new way?” Each one has taken from a few days to a couple of weeks to complete, which I think speaks volumes about the power of these tools - not to mention the power of the web platform as a universal medium for interactive music!

A Melody Autocompletion Tool

Most of my experiments have revolved around the different RNN models Magenta provides, which can predict (or generate) continuations to pieces of music you give them. They can be used to answer the question “if this is how the music begins, where might it go from here?”
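To make that concrete, here’s roughly what asking one of these models for a continuation looks like with Magenta.js. This is only a minimal sketch, not code from any of the demos below: the checkpoint URL is the Magenta-hosted melody RNN checkpoint at the time of writing, and the seed melody is a made-up example.

```js
import * as mm from '@magenta/music';

// Load a hosted melody RNN checkpoint.
const model = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');

// A tiny seed: the first few notes of a melody, as a NoteSequence.
const seed = {
  notes: [
    {pitch: 60, startTime: 0.0, endTime: 0.5},
    {pitch: 62, startTime: 0.5, endTime: 1.0},
    {pitch: 64, startTime: 1.0, endTime: 1.5},
  ],
  totalTime: 1.5,
};

async function continueMelody() {
  await model.initialize();
  // The RNN works on quantized sequences; here, 4 steps per quarter note.
  const quantizedSeed = mm.sequences.quantizeNoteSequence(seed, 4);
  // Ask for 32 more steps at temperature 1.0, then play the result.
  const continuation = await model.continueSequence(quantizedSeed, 32, 1.0);
  new mm.Player().start(continuation);
}

continueMelody();
```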

The most obvious application of this tech that I could think of was autocompletion: Playing the beginning of a phrase and letting the machine improvise a continuation to it. This is basically a tool for idea generation. It’s using technology to throw you things you wouldn’t necessarily expect, which you can then develop further into the next thing you play.

I’d already seen this kind of RNN-powered ideation explored in domains such as text, handwriting, and sketching. Applying the idea to music resulted in the Neural Melody Autocompletion tool. It gives you a musical keyboard, on which you can play a chord or the beginning of a melody and let the Magenta model improvise around it:

This experiment was based on Magenta’s ImprovRNN model. The cool thing about this model is that it lets you condition the output on specific chords, which helps give the neural network more information about the user’s intent: If you play a recognizable chord, the machine will be told to improvise specifically over that chord, usually leading to more musically satisfying results.
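In Magenta.js terms, that chord conditioning is just an extra argument to continueSequence. Another rough sketch, assuming the hosted chord_pitches_improv checkpoint; the chord symbol would come from whatever chord-detection logic maps the user’s held notes to a name like ‘Dm’.

```js
import * as mm from '@magenta/music';

const improvRnn = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/chord_pitches_improv');

// Continue the player's quantized seed, constrained to the chord they played.
// `chordSymbol` is something like 'C', 'Dm', or 'G7'.
async function improvise(quantizedSeed, chordSymbol) {
  await improvRnn.initialize();
  return improvRnn.continueSequence(
      quantizedSeed,
      32,             // number of steps to generate
      1.0,            // temperature
      [chordSymbol],  // chord progression to improvise over
  );
}
```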

I was later elated to hear this little tool had been used at a machine learning workshop for kids, who took to it right away:

What we’ve noticed, that kids grabbed immediately the constraints & possibilities of the first (delay-feedback) system. They started to play around with it, they changed their mood for playing, they were both acting & listening. What was more surprising is that playing together with the neural network was also very smooth: they understood that they have to wait for the next improvised “answers” from the system, they also integrated these answers into their own playing technique.

Visual Music & Machine Learning Workshop for Kids by Agoston Nagy

An Arpeggiator

The autocompletion experiment led fairly directly to the next one, which explored another way to let a machine “augment” a human player’s intentions in the domain of melodies: Arpeggiators.

Using the Neural Arpeggiator’s MIDI I/O to play some serendipitous arpeggiated patterns on a synthesizer.

Rather than simply being an idea generation tool, this is starting to resemble an instrument you can actually play, though it’s still an unusual way of playing because of the unpredictability of the patterns you get. Even so, I’ve spent many enjoyable hours playing around with it.

Like the autocompleter, this experiment was also based on ImprovRNN. This time, though, I didn’t use the output of the model directly, but applied some postprocessing to get more appropriate results. While all the notes still come from the neural net, I ignored their durations and simply used the pitches as a steady pulse pattern typical of synth arpeggiators. I also looped the output into a repeating pattern instead of generating an endless stream of ever-changing melodies.
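The postprocessing boils down to something like the sketch below: keep only the pitches from the RNN’s continuation, re-time them onto an even grid of steps, and loop that pattern. This isn’t the demo’s actual code, just the shape of the idea.

```js
// Turn an RNN continuation into an arpeggiator-style pattern: keep the
// pitches, discard the generated durations, and lay the notes out as a
// steady pulse (here, one note every two sixteenth-note steps).
function toArpeggioPattern(continuation, stepsPerNote = 2) {
  const pitches = continuation.notes.map(note => note.pitch);
  return {
    notes: pitches.map((pitch, i) => ({
      pitch,
      quantizedStartStep: i * stepsPerNote,
      quantizedEndStep: (i + 1) * stepsPerNote,
    })),
    quantizationInfo: {stepsPerQuarter: 4},
    totalQuantizedSteps: pitches.length * stepsPerNote,
  };
}
```

Looping is then simply a matter of scheduling the same pattern again whenever it ends, instead of asking the model for fresh material every time.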

A Drum Machine

Having done these experiments in the realm of melodies and having seen the intriguing results, I turned my attention to Magenta’s Drums RNN model to see what I could do with percussion.

Just like ImprovRNN can generate continuations to melodies, DrumsRNN can generate continuations to drum patterns. Thinking of how to build something interactive on top of this led to the idea of a drum machine with a typical step sequencer interface, but one that would be jointly operated by a human player and an AI assistant. This became the Neural Drum Machine.

In this experiment you provide a “seed pattern” on the left side of the screen, and then let the neural network generate a continuation on the right side. You’re free to choose the lengths of the seed and continuation patterns, as well as change the pattern to your liking once it has been generated. Adjusting the temperature slider affects how wild the generated patterns get, by controlling the amount of randomness used when sampling from the model’s predictions.
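Under the hood this maps quite directly onto the same continueSequence call, now pointed at the hosted drum kit checkpoint. Again a rough sketch rather than the demo’s actual code: the seed here is a hard-coded one-bar kick-and-snare pattern (pitches 36 and 38 in the General MIDI drum mapping), whereas in the demo it comes from whatever you draw into the step sequencer.

```js
import * as mm from '@magenta/music';

const drumsRnn = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/drum_kit_rnn');

// A one-bar seed at 4 steps per quarter: kick (36) on steps 0 and 8,
// snare (38) on steps 4 and 12.
const seed = {
  notes: [
    {pitch: 36, quantizedStartStep: 0,  quantizedEndStep: 1,  isDrum: true},
    {pitch: 38, quantizedStartStep: 4,  quantizedEndStep: 5,  isDrum: true},
    {pitch: 36, quantizedStartStep: 8,  quantizedEndStep: 9,  isDrum: true},
    {pitch: 38, quantizedStartStep: 12, quantizedEndStep: 13, isDrum: true},
  ],
  quantizationInfo: {stepsPerQuarter: 4},
  totalQuantizedSteps: 16,
};

async function generateContinuation(temperature) {
  await drumsRnn.initialize();
  // The temperature slider maps onto this argument: higher values mean
  // more randomness in sampling, i.e. wilder patterns.
  return drumsRnn.continueSequence(seed, 16, temperature);
}
```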

I had previously made a similar drum machine using nothing but pure randomness, and was pleased to find just how much better the results got when they were generated by a neural net trained on real-world percussion data. I found the results quite musical!

Latent Space Exploration

My most recent Magenta-based experiment, Latent Cycles, is a departure from my previous ones in a couple of ways.

Firstly, rather than using a plain RNN model, in this experiment I used one of Magenta’s MusicVAE models to explore musical space in a new way. This model does not simply continue individual note sequences, but lets you play with whole fields of related patterns.

Secondly, this experiment is not really meant to be a creation tool for a musician, but more of a way to listen to a generative space.

Each of the four corners in this space is a little melodic pattern generated using ImprovRNN. The space is then filled in with a 2-dimensional interpolation between those four corner patterns, so that each “intermediate” pattern can also be sounded. As the listener, you can then activate and deactivate patterns of your choosing, experiencing the gradual morphing of patterns over the two-dimensional space.
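MusicVAE makes this kind of grid fairly straightforward to build: you hand its interpolate method the four corner sequences, and it fills in the space between them in its latent space. A minimal sketch, assuming the hosted 2-bar melody checkpoint and four pre-generated, quantized corner patterns:

```js
import * as mm from '@magenta/music';

const musicVae = new mm.MusicVAE(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');

// `corners` is an array of four quantized 2-bar melodies (in Latent Cycles
// these came out of ImprovRNN). With four inputs, interpolate() performs a
// bilinear interpolation and returns a flat array of size * size patterns
// covering the 2-dimensional grid between the corners.
async function buildPatternGrid(corners, size = 5) {
  await musicVae.initialize();
  return musicVae.interpolate(corners, size);
}
```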

When you activate many patterns at the same time, you’ll start to hear the differences and similarities between them. I applied some simple algorithmic techniques to add rhythmic interest when multiple patterns are playing, giving rise to a somewhat Steve Reich-inspired pulse-pattern field of mallet instruments.

There’s an idea contained in this experiment of blurring boundaries between what it means for something to be a musical instrument and what it means to be a musical piece - and between composers and listeners for that matter. Part of the music is generated by Magenta’s VAE model. Another part was created by me, as I set up the system, the rules, and the UI. But the final part is filled in by you, as you listen to the piece and interact with it in your own way. This is an idea that interests me very much, and is inspired by the work of Brian Eno and Peter Chilvers in generative music apps, as well as Max Mathews’ ideas of active listening by conducting, and Laurie Spiegel’s Music Mouse. Technology can be used to find new ways to listen to music, not just create it. And a musical piece can be co-created by the composer and the listener, connected through technology.

Connections Through Music

What resonates with me most in Magenta’s vision is furthering the creative reach of musicians, allowing them to “extend, not replace” their processes. This is what I’ve been trying to do with these experiments.

To me, music is not a “problem to be solved” by AI. Music is not a problem, period. Music is a means of self expression. It’s a way for you to project your inner life into the world, a way for creators to connect with listeners, and a way for listeners to connect with each other. What AI technologies are for, then, is finding new ways to make these connections. And never before has it been this easy to search for them. Feel free to click through to any of the Codepens embedded on this page to find the code, fork it, and start playing!