Music By AI – A Warning Label Is Now Required

AI ADVISORY

Last week, The Verge asked the question, “AI is capable of making music, but does that make AI an artist?” Wow, is that the wrong question. First, you need to define art. I’m sure you have your own definition (it’s personal), but here’s mine: Art cannot be ignored. Art is not to be confused with craft (which may contain the work of artists). Craft can easily be ignored. The distinction between “art” and “craft” is critical. One has magical qualities that touch your soul; the other is just “nicely done.” That said, neither art nor craft should ever be confused with technical skill; just because you can type 120 words per minute does not mean you can write a novel.

The Verge article explored the evolution of human/AI copyright ownership. “What happens if AI software trained solely on Beyoncé creates a track that sounds just like her?” It is also the wrong question. Every human learns music by doing everything humanly possible to mimic other musicians (starting with their own music teachers). How would artificial musical intelligence learn music without “listening” to other musicians? And, other than attempting to create a musical Jurassic Park, why would you ever train any musician (AI or human) on only one other musician’s work? A quick answer might be to capitalize on the creative genius of one particular musical star, like Beyoncé.

What you may not know is that there are literally hundreds (probably thousands) of human composers and producers who can convincingly create a track that even Beyoncé would think she wrote, performed, and produced. Great music is everywhere. Sadly, it’s not the part of the music business that actually sells music. Beyoncé is a package that includes brains, marketing, business, organizational, social, and human skills all wrapped up in a beautiful bundle of extraordinary talent. Her business acumen, style, smile, manner, musicianship, dancing, and relationships all contribute to her success.

If you created the music in a vacuum – no pictures, no dancing, no fashion, no social media, no power couple, no hype – just music, no one would care that any particular song sounded like hers. In other words, if I do a perfect Beyoncé knock-off and put it on YouTube, there are absolutely no guarantees of success. In fact, because it would be so close to something we’ve heard a lot of, it would probably have less chance of finding an audience.

What Are We Really Dealing With?

Can AI create “real” music? That question has been asked and answered. David Cope is Dickerson Emeriti Professor and teaches theory and composition at the University of California at Santa Cruz. He has worked on many different music creation algorithms (perhaps his most famous is EMI – Experiments in Musical Intelligence). Cope used EMI to compose Zodiac, twelve short works for string orchestra in the style of Vivaldi, including a movement called Taurus. This was done back in 2012 – all musical AI has improved exponentially since that time.

The Music Business

When you think of the music business, you think of your favorite singers, musicians, musical entertainers, and favorite kinds of music. But there’s another side of the music business: the commercial side. The vast majority of music you hear in movies, on television, and in commercials is in the background. It is there to evoke a feeling. Sometimes, there’s a “sonic signature” (aka sonic branding) – the new fancy words for “the hook.” In some movies (ones we love), music is a character. In others, musical themes are closely associated with a character (think John Williams’s Darth Vader theme from Star Wars: Episode V – The Empire Strikes Back, or his main theme from Jaws). But most of the time, the job of the musical score is to support the storyline. It is a tool that many directors and producers use brilliantly (and that some truly suck at).

Either way, the music composed and produced for such uses is manufactured factory-style. It is a seriously cost-competitive business at the bottom, there is no middle, and the high end is so insecure about its ability to pick the right music that buyers simply overpay popular artists and bask their projects in the halo of stars.

This is a long way of saying that AI can do over 90 percent of the production work needed in the commercial music business. Can it write a symphony? Yes. Could it write a symphony that people would buy tickets to hear or stream or download? When was the last time you went to hear a symphony perform (other than at a “Mostly Mozart” concert)? But AI could easily compose a Vivaldi-like (Italian Baroque) piece that, if used as transitional music in a documentary or under dialogue and sound effects, would more than do the job. Could a lifelong, professional musician tell that the piece was written by AI? Maybe. It depends on too many factors to go into here. Could a discerning audience? Highly doubtful. In fact, I’m going to put a flag in the ground here and simply say no. If they were not informed (warned), no general audience would have a clue as to how the track was created, nor would the audience care.

The reason is simple. The vast majority of professional composers and producers (even the most successful ones) are journeymen, not Vivaldi and certainly not Mozart. And with all due respect, Beyoncé is an entertainer who uses music as her preferred medium to entertain you. As exciting and wonderful an artist as she is (I am a fan), she is not in the musical company of Vivaldi or Mozart – and Williams, who is one of the greatest film composers of all time, would exclude himself from being mentioned in the same category as Mozart (although I think Vivaldi could have learned a thing or two from Williams).

I’ve been composing and producing music professionally for over 50 years. I can tell you with extreme confidence that the business of music is about to evolve exponentially. Costs of production are going to approach zero. Copyright law is going to be turned on its head, and royalty distribution for all but the most popular songs (because of the way the performing rights societies rank song credits) is going to have to be seriously adjusted.

In another article, we’ll go into case studies about musical human/machine compositional and production collaborations and pure musical AI systems doing work you’d think had to be created by inspired humans who had practiced music their entire lives. Until then, here’s a random sampling of a few different approaches to musical AI. They work surprisingly well, and they get better every day (because they never stop practicing).

Amper Music

Amper offers the ability to “compose custom music in seconds and reclaim the time spent searching through stock music” with its Score product. Whether or not you have a background in music production, Amper promises ease of use in finding the music “that fits the exact style, length, and structure you want.” Amper also offers an API that lets you integrate its music composition capabilities into your own tools, and samples of its music are available online.
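To make the “integrate it into your own tools” idea concrete, here is a purely hypothetical sketch of what calling a composition service like this might look like. The endpoint, parameters, credential, and response format are illustrative assumptions of mine, not Amper’s documented API.

# Hypothetical sketch of calling a composition API. Nothing below is
# Amper's real API; the URL, fields, and response are placeholders.
import requests

API_URL = "https://api.example-music-ai.com/v1/compose"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                  # placeholder credential

def request_track(style: str, length_seconds: int, structure: str) -> bytes:
    """Ask the (hypothetical) service for a finished track matching style, length, and structure."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"style": style, "length_seconds": length_seconds, "structure": structure},
        timeout=60,
    )
    response.raise_for_status()
    return response.content  # assume the service returns rendered audio bytes

if __name__ == "__main__":
    audio = request_track("uplifting corporate", 30, "intro-build-outro")
    with open("cue.wav", "wb") as f:
        f.write(audio)

The specific fields matter less than the workflow: describe the style, length, and structure you want, and get a finished cue back instead of digging through a stock-music library.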

IBM Watson Beat

Watson Beat is a cognitive cloud-based app being developed by machine learning and artificial intelligence expert Janani Mukundan, who teamed up with Richard Daskas, a composer and professional musician, to teach Watson about rhythm, pitch, and instrumentation, as well as the differences in genres. Watson’s neural network then helps artists (like Alex da Kid) create original compositions.

Jukedeck

Jukedeck is based on state-of-the-art technology built to bring artificial intelligence to music composition and production. Its team is training deep neural networks to understand music composition at a granular level, so that it can build tools to aid creativity.

Google NSynth: Neural Audio Synthesis

Google’s NSynth “uses deep neural networks to generate sounds at the level of individual samples,” giving artists control over the timbre and dynamics of the music they create, along with a level of experimentation that a hand-tuned synthesizer cannot offer.
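If “at the level of individual samples” sounds abstract, here is a toy sketch of the underlying idea: predict each new audio sample from the samples that came before it. This is emphatically not Google’s NSynth code (NSynth uses deep neural networks); it fits a simple linear predictor to a training waveform and then generates new audio one sample at a time.

# Toy illustration of sample-level audio generation (NOT NSynth itself):
# fit a linear autoregressive predictor to a waveform, then generate
# new samples one at a time from the preceding context.
import numpy as np

SAMPLE_RATE = 16000
CONTEXT = 64  # how many past samples the predictor sees

# Training signal: one second of a 220 Hz sine with a little noise.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
train = np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(SAMPLE_RATE)

# Build (context, next-sample) pairs and solve least squares for the predictor.
X = np.stack([train[i:i + CONTEXT] for i in range(len(train) - CONTEXT)])
y = train[CONTEXT:]
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Generate half a second of new audio, one sample at a time.
generated = list(train[:CONTEXT])
for _ in range(SAMPLE_RATE // 2):
    context = np.array(generated[-CONTEXT:])
    generated.append(float(context @ weights))

print(f"Generated {len(generated) - CONTEXT} new samples, one at a time.")

A linear predictor like this can only hum along; the point is the sample-by-sample loop, which is where neural models such as NSynth do their far more expressive work.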

Sony’s Flow Machines

Flow Machines uses machine learning and signal processing technology to assist in the creation of music. Its core component, Flow Machines Professional, can be used to create custom melodies using Flow Machines’ music rules, which were created via music analysis.
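As a rough illustration of what “music rules created via music analysis” can mean, here is a toy sketch – not Sony’s system – that learns note-to-note transition statistics from a tiny hand-written corpus and samples a new melody from them.

# Toy sketch of statistical "music rules": a first-order Markov model of
# note transitions learned from a tiny corpus. Illustrative only; this is
# not Flow Machines.
import random
from collections import defaultdict

# Tiny illustrative corpus of melodies written as note names.
corpus = [
    ["C", "D", "E", "G", "E", "D", "C"],
    ["E", "G", "A", "G", "E", "D", "C"],
    ["C", "E", "G", "A", "G", "E", "C"],
]

# "Analysis": count which note tends to follow which.
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start: str = "C", length: int = 8) -> list[str]:
    """Sample a new melody by walking the learned transition table."""
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1]) or [start]  # dead end: fall back to the start note
        melody.append(random.choice(choices))
    return melody

if __name__ == "__main__":
    print(" ".join(generate()))

Real systems analyze far richer features (harmony, rhythm, style), but the principle is the same: derive rules from existing music, then generate new material that obeys them.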

These are just a few of the AI systems being trained as co-composers, co-producers, and co-performers, or as flat-out replacements for human musicians. Will you be able to tell the difference between musical works done completely by AI, co-created works, and pure human works? This is not a question for the future. Most people can’t tell the difference now.

So, get ready for musical warning labels – on video games, movies, TV shows, videos, retail shops, malls, elevators, dance clubs, bars, restaurants, hotel lobbies, and other places that use voluminous amounts of commercial music – that say something like,

  • WARNING! The music you are about to hear and the audio mix you are about to experience are being composed, produced and performed in real time to fit your mood and match your personal preferences and listening environment by an AI system called AutoScoreAI-2. We have previously sought, and you have granted, permission for us to collect and analyze data generated by your audio and video consumption behavior for the quality of your enjoyment. No humans were harmed during this production.

Or were they?

Shelly Palmer is Fox 5 New York's On-air Tech Expert (WNYW-TV) and the host of Fox Television's monthly show Shelly Palmer Digital Living. He also hosts United Stations Radio Network's, ...
