Google Creative Lab just released A.I. Duet, an interactive experiment that lets you play a music duet with the computer. You no longer need code or special equipment to play along with a Magenta music-generation model. Just point your browser at A.I. Duet and use your laptop keyboard or a MIDI keyboard to make some music. You can learn more by reading Alex Chen’s Google Blog post. A.I. Duet is a really fun way to interact with a Magenta music model. And because A.I. Duet is open source, it can also grow into a powerful tool for machine learning research. I learned a lot by experimenting with the underlying code.

A couple of days ago, I grabbed the A.I. Duet code from the Google Creative Lab GitHub and did a bit of hacking. Cheers to Yotam Mann (@tambien) and collaborators for such a clear and easy-to-use code repository! First, I added the ability to drop in trained “bundles” from different Magenta music models (sketched below). Second, I added support for playing back polyphonic sequences from Magenta. You can find the code for this small example on my GitHub. Then I spent a few hours investigating how it all worked.
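The bundle-swapping hack is mostly plumbing; the heart of it is the standard Magenta pattern for loading a trained bundle and building a sequence generator from it. Here is a minimal sketch, assuming the Magenta Python API (whose exact module paths have moved around between versions), with the bundle file name as a placeholder for whichever trained melody model you have on disk:

```python
# Minimal sketch: load a trained Magenta "bundle" (.mag file) and build a
# sequence generator from it. 'attention_rnn.mag' is a placeholder path.
from magenta.models.melody_rnn import melody_rnn_sequence_generator
from magenta.music import sequence_generator_bundle

# Read the serialized checkpoint + metadata from disk.
bundle = sequence_generator_bundle.read_bundle_file('attention_rnn.mag')

# Look up the right generator class from the bundle's own metadata, then
# instantiate it directly from the bundle (no separate checkpoint needed).
generator_map = melody_rnn_sequence_generator.get_generator_map()
generator = generator_map[bundle.generator_details.id](
    checkpoint=None, bundle=bundle)
generator.initialize()
```

Because the bundle carries its own generator id, swapping models is as simple as pointing that first line at a different .mag file.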

How does this support research?

I learned a lot from being able to switch between Magenta models. It was immediately useful to feed the same melody into different models and compare the music (a sketch of that comparison follows the figure below). More importantly, the responsiveness of A.I. Duet made it possible to play with Magenta in real time and receive immediate feedback. I’ve always been a fan of guitar pedals. They’re durable and simple, and I love the ease with which a guitarist can control them by foot while continuing to play. A.I. Duet is the closest I’ve come to feeling that way about Magenta: it made it so easy to start over, play around, try new ideas, and have fun breaking the model.

[Image: a board of guitar effects pedals]
Magenta as easy to use as a set of guitar pedals… something to strive for.
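To make that side-by-side comparison concrete, here is a minimal sketch of feeding one primer melody to two different models offline. Again this assumes the Magenta Python API, and the .mag and .mid file names are placeholders for files on your own disk:

```python
# Sketch: feed the same primer melody to two different models and write
# each model's continuation to a MIDI file for comparison.
from magenta.models.melody_rnn import melody_rnn_sequence_generator
from magenta.music import midi_io, sequence_generator_bundle
from magenta.protobuf import generator_pb2

primer = midi_io.midi_file_to_note_sequence('primer.mid')  # your melody

# Ask each model to continue for eight seconds past the end of the primer.
options = generator_pb2.GeneratorOptions()
options.generate_sections.add(
    start_time=primer.total_time,
    end_time=primer.total_time + 8.0)

generator_map = melody_rnn_sequence_generator.get_generator_map()
for bundle_file in ['basic_rnn.mag', 'attention_rnn.mag']:
    bundle = sequence_generator_bundle.read_bundle_file(bundle_file)
    generator = generator_map[bundle.generator_details.id](
        checkpoint=None, bundle=bundle)
    generator.initialize()
    response = generator.generate(primer, options)
    midi_io.note_sequence_to_midi_file(response, bundle_file + '.response.mid')
```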

A.I. Duet also showed how challenging it is to support live interaction between a musician and a Magenta model. Our current models are not designed to flexibly accommodate tempo changes, and they are not responsive to expressive timing or performance dynamics. This means the model sometimes drops notes or misinterprets their relative lengths. I also learned that the parameter governing how long A.I. Duet waits for the musician to stop playing has a surprisingly large effect on the interaction. When it’s too short, you don’t get the chance to complete a musical phrase before Magenta responds. When it’s too long, you’re left waiting.
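To make that trade-off concrete, here is a hypothetical sketch of the waiting logic written as a simple debounce. This is not the actual A.I. Duet source; SILENCE_TIMEOUT and DuetListener are names I made up for illustration:

```python
# Hypothetical sketch of the "how long to wait" logic: buffer incoming notes
# and trigger a model response only after SILENCE_TIMEOUT seconds with no new
# input. Too short and phrases get cut off; too long and the player waits.
import threading

SILENCE_TIMEOUT = 1.2  # seconds of silence before responding (the tunable knob)

class DuetListener:
    def __init__(self, respond):
        self.respond = respond    # callback that queries the model
        self.buffer = []          # notes played since the last response
        self.timer = None
        self.lock = threading.Lock()

    def on_note(self, note):
        with self.lock:
            self.buffer.append(note)
            if self.timer:
                self.timer.cancel()  # reset the countdown on every new note
            self.timer = threading.Timer(SILENCE_TIMEOUT, self._flush)
            self.timer.start()

    def _flush(self):
        with self.lock:
            phrase, self.buffer = self.buffer, []
        self.respond(phrase)         # hand the finished phrase to the model
```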

Luckily, it’s really easy to hack Magenta and A.I. Duet (or at least I think so). Those of us on the Google Brain team who work on Magenta will continue to collaborate with the Google Creative Lab team on experiments like this. We’ll also think about how best to extend the functionality of A.I. Duet to address even more research questions surrounding creativity and computation. We invite the rest of the music, art, and coding communities to contribute to this effort!