One of Google’s machine learning systems has generated a 90-second piano melody from a base of just four notes, the first work produced by Google’s Magenta program, a project designed to use machine learning to create art and music.
If you missed out on Google Magenta’s first piano recital, you can give it a listen here.
While the minute-and-a-half-long melody may not turn out to be the summer jam of 2016, it is the first piece of music produced by Magenta’s algorithm. The drums and percussion were not created by Magenta; they were added afterward for emphasis. The melody itself was generated by a trained neural network, a system of artificial “neurons” loosely modeled on the structure of the brain.
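To give a rough sense of the general idea, here is a minimal, hypothetical sketch (not Magenta’s actual code) of how a recurrent neural network built with TensorFlow might extend a short four-note primer by sampling one note at a time. The model architecture, layer sizes, pitch encoding, and primer values are all assumptions made purely for illustration.

```python
# Hypothetical sketch only: extend a short primer melody with a sequence model.
# This is NOT Magenta's code; it just illustrates the seed-and-sample idea.
import numpy as np
import tensorflow as tf

VOCAB = 128  # assume notes are encoded as MIDI pitch numbers 0-127

# A toy sequence model: embedding -> LSTM -> per-step distribution over pitches.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.Dense(VOCAB),
])

def generate(primer, steps=64, temperature=1.0):
    """Extend a primer melody (list of MIDI pitches) by sampling one note at a time."""
    notes = list(primer)
    for _ in range(steps):
        logits = model(np.array([notes]))[0, -1] / temperature
        next_note = tf.random.categorical(logits[None, :], num_samples=1)[0, 0].numpy()
        notes.append(int(next_note))
    return notes

# Seed the (untrained, toy) model with a four-note primer, as in the Magenta demo.
print(generate([60, 62, 64, 65], steps=16))
```

In a trained version of such a model, the network’s sampled continuation would reflect the melodic patterns it learned from its training data rather than the random output this toy example produces.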
The Magenta program is built on TensorFlow, Google’s open-source machine learning software. In a blog post, research scientist Douglas Eck describes Magenta’s goals as twofold.
“First, it’s a research project to advance the state of the art in machine intelligence for music and art generation,” he wrote. “Second, Magenta is an attempt to build a community of artists, coders and machine learning researchers.”
Magenta’s goal of bringing the arts and sciences together is supported by Google’s Artists and Machine Intelligence (AMI) program, which aims to sponsor collaborations.
What ultimately comes of Magenta remains to be seen; the team describes the technology as still being “in its infancy.”
However, the team hopes that this is just the beginning, saying, “We’ll start with audio and video support, tools for working with formats like MIDI, and platforms that help artists connect to machine learning models.”
“We want to make it super simple to play music along with a Magenta performance model.”
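As a concrete illustration of what “tools for working with formats like MIDI” could look like, here is a brief, hypothetical sketch that writes a generated list of pitches to a standard MIDI file using the open-source pretty_midi library. The pitches, durations, and file name are assumptions for demonstration, not part of Magenta’s tooling.

```python
# Hypothetical illustration only: save a generated melody as a MIDI file.
import pretty_midi

def save_melody(pitches, path="melody.mid", note_length=0.5):
    """Write a list of MIDI pitch numbers to a single-track MIDI file."""
    pm = pretty_midi.PrettyMIDI()
    piano = pretty_midi.Instrument(program=0)  # program 0 = Acoustic Grand Piano
    for i, pitch in enumerate(pitches):
        start = i * note_length
        piano.notes.append(
            pretty_midi.Note(velocity=100, pitch=pitch, start=start, end=start + note_length)
        )
    pm.instruments.append(piano)
    pm.write(path)

# Example: the four-note primer followed by a few additional notes.
save_melody([60, 62, 64, 65, 67, 65, 64, 62])
```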