• Human Learning: What WaveNet Learned from Humans

    • Sageev Oore ( osageev )

    (or Learning Music Learned From Music)

    A few days ago, DeepMind posted audio synthesis results that included .wav files generated from a training data set of hours of solo piano music. Each .wav file (near the bottom of their post) is 10 seconds long and sounds very much like piano music. I took a closer look at these samples.
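
    Curious readers can take a similar closer look programmatically. Here is a minimal sketch, assuming one of the 10-second samples has been saved locally (the filename below is a placeholder), that plots a spectrogram so the note onsets and harmonics become visible:

    ```python
    # Inspect one of the generated samples with a spectrogram.
    # "wavenet_piano_sample.wav" is a placeholder filename; any of the
    # 10-second .wav files from the DeepMind post would work the same way.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    sample_rate, audio = wavfile.read("wavenet_piano_sample.wav")
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # mix stereo down to mono

    freqs, times, power = spectrogram(audio, fs=sample_rate, nperseg=2048)
    plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-10))
    plt.xlabel("Time (s)")
    plt.ylabel("Frequency (Hz)")
    plt.ylim(0, 4000)  # most piano energy sits below ~4 kHz
    plt.show()
    ```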

    Read full post.
  • Magenta MIDI Interface

    • Adam Roberts ( adarob )

    The Magenta team is happy to announce our first step toward providing an easy-to-use interface between musicians and TensorFlow. This release makes it possible to connect a TensorFlow model to a MIDI controller and synthesizer in real time.

    Don’t have your own MIDI keyboard? There are many free software components you can download and use with our interface. You’ll find details on setting up your own TensorFlow-powered MIDI rig in the README.
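
    To make the idea concrete, here is a minimal sketch of the real-time loop such an interface implies, written with the mido library rather than Magenta’s actual tooling. The port names depend on your setup (list them with mido.get_input_names()), and generate_continuation is a hypothetical stand-in for a call into a TensorFlow model:

    ```python
    # Sketch of a real-time MIDI loop: capture notes from a controller,
    # hand them to a model, and play the response on a synthesizer.
    # Illustrative only; this is not Magenta's actual interface code.
    import time
    import mido

    def generate_continuation(notes):
        # Hypothetical stand-in for a trained TensorFlow model: simply
        # echo the captured phrase back, transposed up an octave.
        return [min(note + 12, 127) for note in notes]

    # Port names depend on your setup; see mido.get_input_names().
    with mido.open_input("MIDI Controller") as inport, \
         mido.open_output("Synthesizer") as outport:
        captured = []
        for msg in inport:  # blocks, yielding messages as they arrive
            if msg.type == "note_on" and msg.velocity > 0:
                captured.append(msg.note)
            if len(captured) >= 8:  # respond after an 8-note phrase
                for note in generate_continuation(captured):
                    outport.send(mido.Message("note_on", note=note, velocity=80))
                    time.sleep(0.25)
                    outport.send(mido.Message("note_off", note=note))
                captured.clear()
    ```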

    Read full post.
  • Generating Long-Term Structure in Songs and Stories

    • Elliot Waite ( elliotwaite )

    One of the difficult problems in using machine learning to generate sequences, such as melodies, is creating long-term structure. Long-term structure comes naturally to people, but it is very hard for machines. Basic machine learning systems can generate a short melody that stays in key, but they have trouble generating a longer melody that follows a chord progression, or follows a multi-bar song structure of verses and choruses. Likewise, they can produce a screenplay with grammatically correct sentences, but not one with a compelling plot line. Without long-term structure, the content produced by recurrent neural networks (RNNs) often seems wandering and random.

    But what if these RNN models could recognize and reproduce longer-term structure?
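
    One simple way to point a model at longer-term structure is to augment its input with “lookback” features: alongside the current event, feed it the events from one and two bars earlier, so repeating or varying a bar becomes easy to learn. The sketch below illustrates that encoding; it is a toy under stated assumptions, not Magenta’s exact implementation:

    ```python
    # Toy "lookback" encoding: each step sees the current event plus the
    # events from one and two bars earlier (padded with -1 at the start).
    # Illustrative only; not Magenta's exact feature set.
    import numpy as np

    STEPS_PER_BAR = 16  # assume sixteen sixteenth-note steps per bar

    def lookback_features(melody):
        """melody: 1-D array of integer note events, one per sixteenth note."""
        features = []
        for t in range(len(melody)):
            one_bar = melody[t - STEPS_PER_BAR] if t >= STEPS_PER_BAR else -1
            two_bars = melody[t - 2 * STEPS_PER_BAR] if t >= 2 * STEPS_PER_BAR else -1
            features.append((melody[t], one_bar, two_bars))
        return np.array(features)

    melody = np.random.randint(48, 72, size=4 * STEPS_PER_BAR)  # four bars
    print(lookback_features(melody).shape)  # (64, 3)
    ```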

    Read full post.
  • Music, Art and Machine Intelligence (MAMI) Conference

    • Adam Roberts ( adarob )

    This past June, Magenta, in partnership with the Artists and Machine Intelligence group, hosted the Music, Art and Machine Intelligence (MAMI) Conference in San Francisco. MAMI brought together artists and researchers to share their work and explore new ideas in the burgeoning space at the intersection of art and machine learning.

    Read full post.
  • Reading List

    • Cinjon Resnick ( cinjon )

    Magenta’s primary goal is to push the envelope in research on music and art generation. Another goal of ours is to teach others about that research. This includes gathering important works in the field in one place, a resource that, if curated, will be valuable to the community for years to come.

    Read full post.
  • A Recurrent Neural Network Music Generation Tutorial

    • Dan Abolafia ( danabo )

    We are excited to release our first tutorial model, a recurrent neural network that generates music. It serves as an end-to-end primer on how to build a recurrent network in TensorFlow. It also demonstrates a sampling of what’s to come in Magenta. In addition, we are releasing code that converts MIDI files to a format that TensorFlow can understand, making it easy to create training datasets from any collection of MIDI files.
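
    As a sketch of the general idea behind that conversion (not the exact format the release defines), one can flatten a MIDI file into a time-ordered list of (pitch, start, duration) events using the pretty_midi library; the input path below is a placeholder:

    ```python
    # Flatten a MIDI file into a time-ordered event sequence that a
    # sequence model can consume. Illustrative only; not the exact
    # converter or format being released.
    import pretty_midi

    def midi_to_events(path):
        midi = pretty_midi.PrettyMIDI(path)
        events = []
        for instrument in midi.instruments:
            if instrument.is_drum:
                continue
            for note in instrument.notes:
                events.append((note.pitch, note.start, note.end - note.start))
        return sorted(events, key=lambda event: event[1])  # order by start time

    events = midi_to_events("example.mid")  # placeholder path
    print(events[:5])
    ```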

    Read full post.
  • Welcome to Magenta!

    • Douglas Eck ( douglaseck )

    We’re happy to announce Magenta, a project from the Google Brain team that asks: Can we use machine learning to create compelling art and music? If so, how? If not, why not? We’ll use TensorFlow, and we’ll release our models and tools in open source on our GitHub. We’ll also post demos, tutorial blog posts, and technical papers. Soon we’ll begin accepting code contributions from the community at large. If you’d like to keep up on Magenta as it grows, you can follow us on our GitHub and join our discussion group.

    Read full post.