Magenta.js is a new JavaScript suite with a simple API for generating music and art with Magenta models. Since it’s built on TensorFlow.js, it runs right in the browser with WebGL acceleration. Get started, or read more about it below!

Here is a simple demo we made with it that plays an endless stream of MusicVAE samples:


Over the last two years, the Magenta project has been working to close the loop between machine learning researchers, creative developers, and traditional artists in our effort to enable and enhance the creative potential of all people with machine learning.

Throughout that time, a big takeaway for those of us working on the Google Brain team has been that focusing on our connection with the developer community yields tools and experiences that bridge the gap between our research and end users. With JavaScript, we see a big opportunity to provide these developers the tools they need to build ML-enhanced creative interfaces.

JavaScript has become a more powerful and important language in recent years thanks to improvements in browsers, WebGL-accelerated linear algebra packages like TensorFlow.js, native-like experiences enabled by frameworks such as Electron and Progressive Web Apps, and code-sharing websites.

We previously explored the potential of open JavaScript APIs for our models when we shared a TensorFlow.js implementation of MusicVAE with some very talented developers, resulting in Latent Loops, Melody Mixer, and Beat Blender. These apps got us really excited, so we decided to buckle down on a more complete JavaScript suite, which we are calling Magenta.js.


The first package we are launching as part of this suite is @magenta/music, which includes implementations of many of our note-based music models such as MusicVAE, MelodyRNN, DrumsRNN, and ImprovRNN.

To make it as easy as possible for you to get started, we are also hosting weights from pre-trained models, along with config files that allow them to be loaded automatically with a single line of code:

const m = new mm.MusicVAE(checkpointURL);

Because inference runs locally, the weights must be transferred to the client browser. Thus, these models were trained with the size of the checkpoints in mind and sometimes sacrifice accuracy to achieve a reasonable package size.


To put the API to the test, two of us took a stab at building our first web apps.

“Endless Trios” by Adam Roberts

To demonstrate how simple it is to randomly generate trios from MusicVAE, I wrote the following snippet which you can paste into an empty .html file, open in your browser, and hear a never-ending stream of trios.

    <html>
    <head>
      <!-- Load @magenta/music -->
      <script src=""></script>

      <!-- Place your code in the script tag below. You can also use an external .js file -->
      <script>
        // Instantiate the model by loading the desired checkpoint.
        const model = new mm.MusicVAE('');
        const player = new mm.Player();

        const start = () => {
          document.getElementById("start").style.display = "none";
          // Resume AudioContext on user action to enable audio.
          mm.Player.tone.context.resume();
          // Endlessly sample and play back the result.
          function sampleAndPlay() {
            return model.sample(1)
                .then((samples) => player.start(samples[0]))
                .then(sampleAndPlay);
          }
          model.initialize().then(sampleAndPlay);
        };
      </script>
    </head>
    <body><button id="start" onclick="start()">Start</button></body>
    </html>

However, I then discovered an (unjustly deprecated) HTML element called marquee and couldn’t help myself. The result is the “Endless Trios” demo at the top of the page. Sorry!

Since the weights for the full 16-bar trio model total 880 MB, I trained a miniaturized version of the model on 4-bar sequences and quantized each weight to 8 bits, which reduces the total to less than 18 MB. Unfortunately this means the trios aren’t as long and won’t be quite as impressive as the full model samples, but I think they’re still compelling and sometimes REALLY GOOD!
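The idea behind 8-bit weight quantization is simple: each float32 tensor is replaced by one byte per weight plus a scale and offset, and approximate floats are reconstructed at load time. Here is a minimal JavaScript sketch of the uniform variant (an illustration of the general technique, not Magenta.js’s actual implementation):

```javascript
// Uniform 8-bit quantization: store min and scale per tensor plus one
// byte per weight; reconstruct approximate floats when loading.
function quantize(weights) {
  const min = Math.min(...weights);
  const max = Math.max(...weights);
  const scale = (max - min) / 255 || 1;  // avoid 0 for constant tensors
  const data = Uint8Array.from(weights, (w) => Math.round((w - min) / scale));
  return {min, scale, data};
}

function dequantize({min, scale, data}) {
  return Float32Array.from(data, (b) => min + b * scale);
}

const original = [-1.0, -0.25, 0.0, 0.5, 1.0];
const restored = dequantize(quantize(original));
// Each restored weight is within scale/2 of the original, at a quarter
// of the storage cost of float32.
```

The accuracy loss mentioned above comes from that rounding step: every weight moves by up to half a quantization bin, which is usually tolerable for generative audio models in exchange for a 4x smaller download.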

“Mindless Improv” by Ian Simon

I threw together a little demo of a model that improvises over chords. And it sometimes makes mistakes! There’s nothing fancy going on here as this is just a chord-conditioned melody LSTM, but the interface makes it quite easy to play around with different progressions.
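To give a sense of what chord conditioning looks like in code, here is a hedged sketch: the melody seed is a plain NoteSequence object quantized into steps, and the chord progression is just an array of chord symbols. The specific pitches, step counts, and temperature below are illustrative placeholders, and the checkpoint URL is omitted:

```javascript
// A melody seed as a quantized NoteSequence: two quarter notes at
// 4 steps per quarter (so 4 steps each).
const seed = {
  quantizationInfo: {stepsPerQuarter: 4},
  totalQuantizedSteps: 8,
  notes: [
    {pitch: 60, quantizedStartStep: 0, quantizedEndStep: 4},  // C4
    {pitch: 64, quantizedStartStep: 4, quantizedEndStep: 8},  // E4
  ],
};

// The chord progression to improvise over, as plain chord symbols.
const chords = ['C', 'Am', 'F', 'G'];

// In the browser, continuing the seed over those chords would then look
// roughly like (checkpointURL elided):
//   const model = new mm.MusicRNN(checkpointURL);  // an ImprovRNN checkpoint
//   model.initialize()
//       .then(() => model.continueSequence(seed, 32, 1.0, chords))
//       .then((result) => new mm.Player().start(result));
```

Because the inputs are ordinary JavaScript objects and strings, wiring them to a chord-picker UI (as in the demo) is just a matter of swapping out the `chords` array.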

A Building Block

Of course, there are many more available models and endless possibilities for creative interactions, which is where the broader Magenta community comes in. For example, Magenta contributor Tero Parviainen, who helped develop Magenta.js, has already built a collection of really exciting applications. One example is Latent Cycles, which uses both ImprovRNN and MusicVAE to produce a fun and meditative interaction:

Play with more of Tero’s creations and learn about his experience building and building with Magenta.js in his blog post.

What’s next?

We’re hoping you’ll help us answer that question! Now that the core library is available we will continue to add models, but we need your help to make it as useful as possible for the diverse applications you dream up.

Please contribute to the repo and share your creations with the community by using the magentajs tag and writing about it on our discussion list. If you use someone’s Magenta.js app to make some awesome music, share it with us as well!

We can’t wait to see and hear what you’ll do.

How to cite

If you extend or use this work, please cite the paper where it was introduced:

@inproceedings{magentajs,
  title = {Magenta.js: A JavaScript API for Augmenting Creativity with Deep Learning},
  author = {Adam Roberts and Curtis Hawthorne and Ian Simon},
  year = {2018},
  booktitle = {Joint Workshop on Machine Learning for Music (ICML)}
}