Project Magenta: now AI is making music
A cursory run through today's most popular music leaves one with the feeling that the industry is factory-producing songs. Generic basslines, predictable drops, inevitable rap cameos and lyrics that could have been written by a third grader all seem to indicate that creativity is rapidly dying out.
It's all so formulaic that you'd imagine machines could do as good a job, if not better, in the near future. Well, if the Google Brain team and its Project Magenta have anything to say about it, the next big hit song may just be the product of artificial intelligence.
Going by the website, Project Magenta is Google's attempt to explore whether artificial intelligence (AI) and machine learning can be used "to create compelling art and music". While AI has long been used for things like speech and image recognition, Project Magenta is Google's attempt to use AI to create instead. According to a blog post by Magenta scientist Douglas Eck, "[Google is] developing algorithms that can learn how to generate art and music, potentially creating compelling and artistic content on their own."
Evidently, the answer to Google's question about whether machines can produce music is a solid yes. The algorithm, built on Google's open-source machine learning framework TensorFlow, constructed a piano melody just under two minutes long from a prompt of only four notes.
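Magenta's actual system is a neural network trained in TensorFlow, and the article doesn't spell out its internals. But the core generation loop, priming with a few notes and then repeatedly sampling a next note, can be sketched with a deliberately simple stand-in model. Everything below (the note numbers, the training corpus, the Markov-chain model itself) is illustrative, not Magenta's API:

```python
import random

# Toy stand-in for a learned sequence model: a first-order Markov chain
# over MIDI note numbers. Magenta's real model is a recurrent neural
# network; this only illustrates the "prime with four notes, then keep
# sampling the next note" generation loop described in the article.

def train_bigrams(melodies):
    """Count note-to-note transitions across the training melodies."""
    table = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, primer, length, seed=0):
    """Extend a primer melody by sampling successors from the table."""
    rng = random.Random(seed)
    melody = list(primer)
    while len(melody) < length:
        choices = table.get(melody[-1])
        if not choices:  # unseen note: fall back to the primer's first note
            choices = [primer[0]]
        melody.append(rng.choice(choices))
    return melody

# A hypothetical training corpus, plus a four-note primer like Magenta's.
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [60, 64, 67, 72, 67, 64, 60]]
table = train_bigrams(corpus)
tune = generate(table, primer=[60, 62, 64, 65], length=16)
print(tune)
```

The point of the sketch is the shape of the problem: the model never stores whole songs, only local statistics, which is exactly why such simple approaches produce music that wanders without long-term structure.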
While the composition itself was fairly basic, unlikely to win any Grammys or other awards, the achievement itself is fairly momentous. Programs have been used to generate 'music' before, but their output has lacked the kind of continuous structure it would have had it been composed by a person. "So much machine-generated music and art is good in small chunks, but lacks any sort of long-term narrative arc," writes Eck in the same blog post. Google's latest attempt, though, manages to overcome this problem.
Yes, Project Magenta's first 'song' was simple, and a drum track was even added later for texture. But it does have some build-up and progression: far more than a novice pianist might manage if handed just four notes and asked to construct a song, and definitely more structure than anything machine learning has produced autonomously in the past.
What's more, Project Magenta's use of neural networks, the same approach Google used to build the Go-playing AlphaGo system, means the program improves as it trains on more data. So expect its compositions to only get better from here.
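"Improves with training" has a concrete meaning for a neural network: each pass over the data nudges the model's parameters to make the training melodies more likely. The toy demo below, which is not Magenta's model, fits a simple categorical distribution over a hypothetical five-note vocabulary by gradient descent and watches the loss fall epoch by epoch:

```python
import math

# Illustrative only: fit a categorical distribution over a small pitch
# vocabulary to a training melody via gradient descent on cross-entropy.
# The falling loss is the same mechanism by which Magenta's (far richer)
# neural network gets better as it trains.

VOCAB = [60, 62, 64, 65, 67]  # hypothetical five-note vocabulary
melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 62, 64]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def loss(logits):
    """Average negative log-likelihood of the melody under the model."""
    probs = softmax(logits)
    return -sum(math.log(probs[VOCAB.index(n)]) for n in melody) / len(melody)

logits = [0.0] * len(VOCAB)
counts = [sum(1 for n in melody if n == v) for v in VOCAB]
history = []
for epoch in range(200):
    probs = softmax(logits)
    # Gradient of cross-entropy w.r.t. logits: predicted minus observed frequency
    grad = [p - c / len(melody) for p, c in zip(probs, counts)]
    logits = [l - 0.5 * g for l, g in zip(logits, grad)]
    history.append(loss(logits))

print(round(history[0], 3), "->", round(history[-1], 3))
```

Each epoch shaves a little off the loss, which is the precise sense in which more training makes the generated output better.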
Since Google has never really been known for taking things slowly, it is also tapping the wider community to speed up Magenta's development. "Our goal is to build a community where the right people are there to help out. If the Magenta tools don't work for you, let us know. We encourage you to join our discussion list and shape how Magenta evolves. We'd love to know what you think of our work - as an artist, musician, researcher, coder, or just an aficionado," reads a statement from Google.
In keeping with this, Google will release its models and tools for free on GitHub so that coders can improve the algorithms. Not just that: in its attempt to create a space where coders and musicians can come together and collaborate, Magenta is also inviting musicians on board.
Since musicians might otherwise find the platform difficult to use, Google aims to provide audio and video support, tools for MIDI users, and interfaces that make it easier for musicians to work with machine learning models.
We've already seen, as with the recently concluded robot art contest, that machines can indeed create art. Now, with the advent of Project Magenta, we can look forward to a future where AI produces an endless array of interesting music. Whether it'll be less generic, unintelligent and monotonous than today's music is anyone's guess, but we'd like to think that's impossible. We're looking at you, Justin Bieber.