Many of the generative models in Magenta.js require music to be input as a symbolic representation such as MIDI. But what if you only have audio?
We have just finished porting our piano transcription model, Onsets and Frames, to JavaScript using TensorFlow.js and have added it to the Magenta.js library in v1.2. Now you can input audio of solo piano performances and have it automatically converted to MIDI in the browser.
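As a rough sketch of what this might look like in your own page (the checkpoint URL and method names such as `transcribeFromAudioFile` are assumptions here; verify them against the current Magenta.js documentation), transcribing an uploaded audio file could be wired up like this:

```js
import * as mm from '@magenta/music';

// Hosted checkpoint path is an assumption; check the docs for the current URL.
const CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/transcription/onsets_frames_uni';

const model = new mm.OnsetsAndFrames(CHECKPOINT);

// Transcribe a user-selected audio file into a NoteSequence.
async function transcribe(fileBlob) {
  await model.initialize();  // Loads the model weights (do this once).
  // Decodes the audio and runs transcription entirely in the browser.
  return model.transcribeFromAudioFile(fileBlob);
}

// Hypothetical <input type="file" id="audio-input"> element for illustration.
document.querySelector('#audio-input').addEventListener('change', async (e) => {
  const noteSequence = await transcribe(e.target.files[0]);
  console.log(`Transcribed ${noteSequence.notes.length} notes.`);
  // The NoteSequence can then be serialized to a standard MIDI file.
  const midiBytes = mm.sequenceProtoToMidi(noteSequence);
});
```

Because everything runs client-side in TensorFlow.js, the audio never leaves the user's machine.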
Try out the demo app, Piano Scribe, shown below to see the library in action for yourself. If you don't have a piano recording handy, you can try singing to it, and it will do its best!
Check out the documentation to learn how to use the library in your own app, and share what you make using #madewithmagenta!