
Make Music and Art
Using Machine Learning


What is Magenta?

An open source research project exploring the role of machine learning as a tool in the creative process.


Magenta is distributed as an open source Python library, powered by TensorFlow. This library includes utilities for manipulating source data (primarily music and images), using this data to train machine learning models, and finally generating new content from these models.
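To make the data-manipulation step concrete, here is a minimal illustrative sketch (plain Python, not the Magenta API itself) of the kind of preprocessing such a library performs: snapping note events, given in seconds, onto a fixed sixteenth-note grid before they are fed to a model. The constants and function names are hypothetical.

```python
# Illustrative sketch only -- not Magenta's actual API.
# Quantize note events to a fixed step grid, a typical preprocessing
# step for symbolic-music models.

STEPS_PER_QUARTER = 4  # sixteenth-note resolution (hypothetical default)
QPM = 120.0            # quarter notes per minute

def seconds_to_steps(seconds, qpm=QPM, steps_per_quarter=STEPS_PER_QUARTER):
    """Convert a time in seconds to the nearest quantized step index."""
    steps_per_second = steps_per_quarter * qpm / 60.0
    return round(seconds * steps_per_second)

# Notes as (MIDI pitch, start_time_s, end_time_s) tuples.
notes = [(60, 0.00, 0.48), (64, 0.51, 0.99), (67, 1.02, 1.97)]

quantized = [
    (pitch, seconds_to_steps(start), seconds_to_steps(end))
    for pitch, start, end in notes
]
print(quantized)  # -> [(60, 0, 4), (64, 4, 8), (67, 8, 16)]
```

After quantization like this, the library's training pipelines can treat performances as discrete event sequences, which is what the generative models consume.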


Magenta.js is an open source JavaScript API for using the pre-trained Magenta models in the browser. It is built with TensorFlow.js, which allows for fast, GPU-accelerated inference. If you're interested in seeing how Magenta models have been used in existing applications or want to build your own, this is probably the place to start!

Featured projects

Magenta Studio (beta)
Magenta Studio is a collection of music plugins built on Magenta’s open source tools and models.
Learn more.
Onsets and Frames
Transcribing piano with a neural network.
Learn more.
Latent Loops
Creating palettes for blending and exploring musical loops and scores.
Learn more.
Making music using new sounds generated with machine learning.
Learn more.

What's new?

Porting Arbitrary Style Transfer to the Browser
Reiichiro Nakano describes how he contributed arbitrary image style transfer to Magenta.js using model distillation to improve performance in the browser. Read the blog post.
Music Transformer: Generating Music with Long-Term Structure
We present Music Transformer, a self-attention-based neural network that can generate music with long-term coherence. Read the blog post.
ML as Collaborator: Composing Melodic Palettes with Latent Loops
Catherine McCurry, a musician and creative technologist with Google’s Pie Shop, writes about designing tools that help musicians make use of Magenta’s musical models. Read the blog post.
The MAESTRO Dataset and Wave2Midi2Wave
MAESTRO (MIDI and Audio Edited for Synchronous TRacks and Organization) is a new dataset composed of over 172 hours of virtuosic piano performances captured with fine alignment (~3 ms) between note labels and audio waveforms. Read the blog post.