• Learning from A.I. Duet

    • Douglas Eck ( douglaseck )

    Google Creative Lab just released A.I. Duet, an interactive experiment that lets you play a music duet with the computer. You no longer need code or special equipment to play along with a Magenta music generation model. Just point your browser at A.I. Duet and use your laptop keyboard or a MIDI keyboard to make some music. You can learn more by reading Alex Chen’s Google Blog post. A.I. Duet is a really fun way to interact with a Magenta music model. Because A.I. Duet is open source, it can also grow into a powerful tool for machine learning research. I learned a lot by experimenting with the underlying code.

    Read full post.
  • Magenta wins "Best Demo" at NIPS 2016!

    • Adam Roberts ( adarob )

    The Magenta team is very proud to have been awarded “Best Demo” at the Neural Information Processing Systems conference in Barcelona last week.

    Here is a short video of the demo in action at the Google Brain office:

    Read full post.
  • Tuning Recurrent Neural Networks with Reinforcement Learning

    • Natasha Jaques ( natashamjaques )

    We are excited to announce our new RL Tuner algorithm, a method that uses Reinforcement Learning (RL) to enhance the performance of an LSTM trained on data. We create an RL reward function that teaches the model to follow certain rules, while still allowing it to retain information learned from data. We use RL Tuner to teach concepts of music theory to an LSTM trained to generate melodies. The two videos below show samples from the original LSTM model, and the same model enhanced using RL Tuner.
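
    For intuition, here is a minimal sketch of the idea in Python: the reward blends a rule-based music-theory term with the pre-trained model's log-likelihood. The helper names and the toy "stay in C major" rule are assumptions for illustration, not the released implementation.

```python
import numpy as np

# Illustrative only: RL Tuner-style rewards combine a rule-based music-theory
# term with a term that keeps the tuned policy close to the data-trained LSTM.
C_MAJOR_PITCH_CLASSES = {0, 2, 4, 5, 7, 9, 11}

def theory_reward(midi_pitch):
    """+1 if the note obeys the toy rule (in C major), -1 otherwise."""
    return 1.0 if midi_pitch % 12 in C_MAJOR_PITCH_CLASSES else -1.0

def combined_reward(midi_pitch, note_rnn_log_prob, c=0.5):
    """Blend the rule reward with the pre-trained model's log-likelihood, so the
    tuned model follows the rules without forgetting what it learned from data."""
    return theory_reward(midi_pitch) + c * note_rnn_log_prob

# Example: middle C, which the (hypothetical) pre-trained model rates as likely.
print(combined_reward(midi_pitch=60, note_rnn_log_prob=np.log(0.3)))
```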


    Read full post.
  • Multistyle Pastiche Generator

    • Fred Bertsch ( fredbertsch )

    Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur have extended image style transfer by creating a single network which performs more than one stylization of an image. The paper[1] has also been summarized in a Google Research Blog post. The source code and trained models behind the paper are being released here.

    The model creates a succinct description of a style. These descriptions can be combined to create new mixtures of styles. Below is a picture of Picabo[5] stylized with a mixture of 3 different styles. Adjust the sliders below the image to create more styles.

    [Style mixture sliders: 30%, 35%, 35%]
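
    Conceptually, the mixing is a weighted combination of per-style vectors. Below is a minimal NumPy sketch of that idea; the function name and stand-in vectors are assumptions, and the released model mixes learned per-style normalization parameters rather than raw embeddings.

```python
import numpy as np

def blend_styles(style_vectors, weights):
    """Weighted combination of per-style vectors (a stand-in for the learned
    per-style parameters used by the released model)."""
    weights = np.asarray(weights, dtype=np.float32)
    weights /= weights.sum()          # normalize so the mixture sums to 1
    return weights @ style_vectors    # shape: (style_dim,)

# Three stand-in style vectors mixed with the slider settings shown above.
styles = np.random.randn(3, 100).astype(np.float32)
mixed = blend_styles(styles, weights=[0.30, 0.35, 0.35])
print(mixed.shape)  # (100,) -- a new "style" fed to the stylization network
```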

    Read full post.
  • Human Learning What WaveNet Learned from Humans

    • Sageev Oore ( osageev )

    (or Learning Music Learned From Music)

    A few days ago, DeepMind posted audio synthesis results that included .wav files generated from a training data set of hours of solo piano music. Each wave file (near the bottom of their post) is 10 seconds long, and sounds very much like piano music. I took a closer look at these samples.

    Read full post.
  • Magenta MIDI Interface

    • Adam Roberts ( adarob )

    The Magenta team is happy to announce our first step toward providing an easy-to-use interface between musicians and TensorFlow. This release makes it possible to connect a TensorFlow model to a MIDI controller and synthesizer in real time.

    Don’t have your own MIDI keyboard? There are many free software components you can download and use with our interface. Find out more details on setting up your own TensorFlow-powered MIDI rig in the README.
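
    For a rough sense of the plumbing involved, the sketch below uses the mido library to read notes from a MIDI input and echo them to a synthesizer port. It is a generic illustration of real-time MIDI handling in Python, not the Magenta interface itself.

```python
import mido  # pip install mido python-rtmidi

# Port names vary by system; list them and pass one explicitly if the
# defaults are not what you want.
print(mido.get_input_names())
print(mido.get_output_names())

with mido.open_input() as keyboard, mido.open_output() as synth:
    for msg in keyboard:                 # blocks, yielding incoming messages
        if msg.type == 'note_on':
            # A real rig would hand the note to a generation model here; this
            # sketch just echoes it back to the synth transposed up a fifth.
            synth.send(msg.copy(note=min(msg.note + 7, 127)))
```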

    Read full post.
  • Generating Long-Term Structure in Songs and Stories

    • Elliot Waite ( elliotwaite )

    One of the difficult problems in using machine learning to generate sequences, such as melodies, is creating long-term structure. Long-term structure comes very naturally to people, but it’s very hard for machines. Basic machine learning systems can generate a short melody that stays in key, but they have trouble generating a longer melody that follows a chord progression, or follows a multi-bar song structure of verses and choruses. Likewise, they can produce a screenplay with grammatically correct sentences, but not one with a compelling plot line. Without long-term structure, the content produced by recurrent neural networks (RNNs) often seems wandering and random.

    But what if these RNN models could recognize and reproduce longer-term structure?

    Read full post.
  • Music, Art and Machine Intelligence (MAMI) Conference

    • Adam Roberts ( adarob )

    This past June, Magenta, in partnership with the Artists and Machine Intelligence group, hosted the Music, Art and Machine Intelligence (MAMI) Conference in San Francisco. MAMI brought together artists and researchers to share their work and explore new ideas in the burgeoning space intersecting art and machine learning.

    Read full post.
  • Reading List

    • Cinjon Resnick ( cinjon )

    Magenta’s primary goal is to push the envelope forward in research on music and art generation. Another goal of ours is to teach others about that research. This includes collecting important works in the field in one place, a resource that, if well curated, will be valuable to the community for years to come.

    Read full post.
  • A Recurrent Neural Network Music Generation Tutorial

    • Dan Abolafia ( danabo )

    We are excited to release our first tutorial model, a recurrent neural network that generates music. It serves as an end-to-end primer on how to build a recurrent network in TensorFlow. It also demonstrates a sampling of what’s to come in Magenta. In addition, we are releasing code that converts MIDI files to a format that TensorFlow can understand, making it easy to create training datasets from any collection of MIDI files.
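
    As a rough illustration of what such a conversion involves (using the pretty_midi library rather than the released Magenta code), the sketch below flattens MIDI notes into an array of (pitch, start, end) rows that a model could consume.

```python
import numpy as np
import pretty_midi  # pip install pretty_midi

def midi_to_note_array(path):
    """Flatten a MIDI file into (pitch, start_sec, end_sec) rows, ordered by
    onset time. A stand-in for the preprocessing a training pipeline performs."""
    midi = pretty_midi.PrettyMIDI(path)
    rows = [(note.pitch, note.start, note.end)
            for inst in midi.instruments if not inst.is_drum
            for note in inst.notes]
    rows.sort(key=lambda r: r[1])  # order notes by onset time
    return np.array(rows, dtype=np.float32)

notes = midi_to_note_array('example.mid')  # any MIDI file on hand
print(notes.shape)  # (num_notes, 3)
```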

    Read full post.
  • Welcome to Magenta!

    • Douglas Eck ( douglaseck )

    We’re happy to announce Magenta, a project from the Google Brain team that asks: Can we use machine learning to create compelling art and music? If so, how? If not, why not? We’ll use TensorFlow, and we’ll release our models and tools in open source on our GitHub. We’ll also post demos, tutorial blog postings and technical papers. Soon we’ll begin accepting code contributions from the community at large. If you’d like to keep up on Magenta as it grows, you can follow us on our GitHub and join our discussion group.

    Read full post.