The Magenta team is very proud to have been awarded “Best Demo” at the Neural Information Processing Systems conference in Barcelona last week.
Here is a short video of the demo in action at the Google Brain office:
The demo consists of several components:
- The Magenta library, which is built with TensorFlow and provides an API to generate MIDI data with neural network models.
- The Magenta-MIDI interface, which provides an interactive communication layer between TensorFlow and MIDI devices.
- Models developed by the Magenta team and our collaborators, including a set of six neural networks: a basic one-hot melody RNN, a melody RNN with lookback, a melody RNN with attention, an RNN fine-tuned with reinforcement learning, a drum RNN, and a polyphonic RNN based on BachBot. Pre-trained versions of all of these models can be downloaded from our GitHub.
- A visualization built by Google CreativeLabs.
- A MIDI keyboard and drum pad.
- An Ableton Live / Max/MSP interface that connects all of the above, so that participants can improvise with the models.
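To give a feel for what "one-hot" means for the basic melody RNN above, here is a minimal, illustrative sketch of one-hot encoding a melody event sequence. The details are assumptions for illustration only (the pitch range, the two special events, and all names are hypothetical, not Magenta's actual vocabulary):

```python
# Hedged sketch: one way a melody could be one-hot encoded for a melody RNN.
# ASSUMPTIONS (not from the post): vocabulary = MIDI pitches 48-84 plus two
# special events (note-off and no-event); all identifiers are illustrative.

NOTE_OFF = 0      # end the current note
NO_EVENT = 1      # sustain the current note (or continue a rest)
MIN_PITCH, MAX_PITCH = 48, 84
VOCAB_SIZE = 2 + (MAX_PITCH - MIN_PITCH + 1)   # 2 special events + 37 pitches

def encode_event(event):
    """Map one melody event to a one-hot vector (a plain Python list)."""
    if event in (NOTE_OFF, NO_EVENT):
        index = event
    else:
        index = 2 + (event - MIN_PITCH)   # shift the pitch into the vocab range
    vec = [0] * VOCAB_SIZE
    vec[index] = 1
    return vec

# A four-event melody: C4 on, sustain, note off, E4 on.
melody = [60, NO_EVENT, NOTE_OFF, 64]
encoded = [encode_event(e) for e in melody]
print(len(encoded), len(encoded[0]))  # 4 events, each a 39-dim one-hot vector
```

A sequence of such vectors is what an RNN like the ones listed above consumes step by step; the lookback and attention variants differ in how they condition on earlier steps, not in this basic input representation.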
If you use this interface to produce music, to support a live performance, or even to hold your own AI jam session, we’d love to hear about it! Please drop us a line at firstname.lastname@example.org.