THE FUTURE OF MUSIC-MAKING AI
Article by Christopher Rucks
As a kid, I loved sci-fi. My version of the boogeyman was Skynet, the Terminator, or one of the many emotionless movie machines programmed to extinguish life. Despite those fears, I managed to survive into adulthood. But now, my childhood dread of emotionless machines is returning.
Last year, CNN Money published an article entitled “Popular YouTube Artist Uses AI to Record New Album,” detailing the revolutionary song production process of American Idol alum, Taryn Southern.
“Southern has only basic piano skills, so she turned to the program to deliver the instrumental part of the song. The A.I. developed the harmonies, chords and sequences.”
I felt a slight flurry of panic in my intestines when I read that sentence. Taryn, a 2004 American Idol contestant, collaborated with AI software developed by production music startup Amper Music to create the underlying composition beneath “Break Free,” the single from her forthcoming album. In fact, AI will form the production foundation for the entire album, I AM AI.
In the article, Taryn espoused the benefits of working with the AI platform:
“In a funny way, I have a new song-writing partner who doesn’t get tired and has this endless knowledge of music making….” She also said that “writing the structure of the song is now 20 times faster than when I relied on human musicians.”
That slight flurry escalated into full-on hammering heart palpitations. To boot, the single won some approval from critics, including music and tech enthusiasts who’ve been keeping tabs on such futuristic collaborations with wide, eager eyes.
I nonchalantly tossed that article into my browser’s bookmark void until I ran into yet another journalist probing into the evolving relationship between music, man, and machine.
Playboy’s September/October 2017 music issue also dedicated some ink to the subject of music and AI. Writer Aaron Carnes used his journalistic dexterity to walk the reader through the menagerie of music-making AI, speaking with the industry’s top innovators of old and new and sampling today’s music-creating automatons in the process.
Carnes visits Google’s San Francisco HQ to check out Project Magenta, “an open-source endeavor that uses artificial intelligence and machine learning to create tools for artists.” He sits in on a jam session with a team of Magenta’s engineers who double as musicians and an unlikely band member, a platform named A.I. Duet. The musicianeers input melodic phrases, which Duet processes before spitting out new phrases based on its analysis.
“What we come up with is okay, but I’m more impressed with the bass line coming out of A.I. Duet…. Musicians are often strapped for fresh ideas, and this program seems perfect for spitting out an unending supply of them,” Carnes says.
Carnes’s piece expounds on the shift in interest from academia to commerce, with intrigued tech players getting behind musical AI research. A host of startups and established giants are either creating AI, investing in it, or peering intently over the shoulder of those who are doing either.
For example, he elaborates on Jukedeck, a company leveraging AI to “help would-be musicians write songs without having to learn to play an instrument.” Jukedeck’s co-founder, Patrick Stobbs, cites Instagram’s democratization of photography as an inspiration for his own platform.
In the final section, Carnes speaks with David Cope, a godfather of music AI, who has a long history of inventing zany music-making programs. Cope argues that the creation of music has never been more than idea theft — creators have always “stitched” together influential pieces of music fabric to weave new works; he designs his software the same way. When speaking about the concerns of AI and whether it will put composers and producers out of work, Cope says: “They do what we tell them to do…. They have no self-awareness. They have no consciousness….”
Carnes wraps with final thoughts from Jason Freidenfelds, senior communications manager at Google:
“…AI’s impact in music will exceed any one technological advancement…. It may be as big a deal as the original shift from people making music primarily with their own bodies to crafting instruments that had their own unique properties and sounds.”
I finished Carnes’s Playboy piece wholeheartedly appreciating his sense of awe at the impending era of artificial musicianship, but also relating to the underlying current of caution snaking through it.
First, I am not a current music producer; I’m more of a defunct one. I’ve traded my keyboards and synths for a laptop. Yet, as the author of Don’t Make Beats Like Me, a book designed to assist music producers on their worthy yet difficult journey, this subject makes my palms sweaty.
I feel compelled to sneak into the offices of the leading music AI companies and toss powerful magnets into the server rooms housing all of the code that animates their digital, musical marionettes. I jest. But I am concerned. Ideas begun with the best intentions can have disastrous consequences:
“Right now, they’re not sure how the technology will impact music or if it will be used as they intend.” — Aaron Carnes
Last year, Tesla unveiled a self-driving semi truck as sexy as it is sophisticated. The internet gasped, hailing the truck as a bold step toward the sci-fi future we’ve been promised. Immediately after, journalists questioned what’s in store for the truck-driving industry and its human drivers. Japan, suffering a shortage of workers, is pioneering the rise of fast-food automation. What happens when Japan’s wave of burger-flipping cyborgs infiltrates the United States? Uber and Lyft drivers are wondering when self-driving cars will drop bombs of obsolescence on their gig-economy hustle.
In that same spirit, what’s going to happen to an already beleaguered industry of music producers and composers when companies start churning out competent software that threatens their place in the music-production process?
I use the term beleaguered because producers and songwriters are still trying to navigate this latest iteration of the music economy with streaming as its engine. When spotlighting the disruption in music, we usually illuminate the plight of artists — artists who can supplement income through touring and merch. Less discussed are the producers and writers whose dwindling incomes are confined to the stream and the sale. While these creators twist at this Rubik’s Cube of music income, the tech industry is now tossing into the dilemma a live grenade in the form of music-making AI.
There is also the problem of surplus. Improvements in technology, coupled with falling costs, have enabled an abundance of music makers: a surplus workforce of MPC mashers and keyboard finger wigglers.
In response to this surplus, some have unleashed their inner salesmen on other producers, selling drum kits and sounds to their competition. Others have shifted their goals from landing records with Rihanna to landing syncs with Reebok. This is evidence of an innovative yet stressed market of creators.
And the imminent arrival of this tech only poses more troubling questions to wrestle with:
- Will songwriters cozy up to algorithms instead of producers?
- Is there a correlation between a growing dependency on technology and a decline in the quality of the music?
- Will we lose the spontaneity of two or more humans in a room bringing their own musical perspective to the creation process?
- Will future producers eschew investment in theory and instrument playing, considering them wastes of time better spent updating Instagram?
So many of these questions are unanswerable until time lurches forward and reveals its kept secrets.
I have to admit it: hypocrisy lives within these paragraphs. Technology is responsible for the advancement of music as we know it. Technology has served as the midwife for every musical leap forward, from the construction of instruments for classical music to the development of samplers for hip hop. Tech opened the door to modern-day recording. Tech was the passageway from phonograph records to “1,000 songs in your pocket.” And every piece of tech was likely vilified upon its introduction to the world. Tribal musicians who were churning out beats on their bellies and thighs likely thought of setting fire to the hut of the guy who came up with the drum.
Proponents of musical AI suggest that all of this worrying is fruitless: the technology is merely the digital equivalent of having another musician in the room to bounce ideas off of. But don’t we want music makers to dip their ladle into the vat of human creative genius the way that great musicians of the past did: Jimi from Little Richard, MJ from Stevie Wonder, Pharrell from Teddy Riley?
All we know is this: the inexorable bullet of technology won’t be stopped. But can there be a middle ground? Can music-making humans and machines work together without destroying one another? We won’t know until after whatever will happen has happened. And if things go wrong, we’ll only get to look back regretfully at the time before Pharrell’s consciousness was uploaded to a box that told producers what notes to press. Regardless of the answers to these many questions, I believe that it’s only a matter of time before Skynet drops a Christmas album.