Record Labels Seek To Destroy AI Music Generators
One of the biggest lawsuits in the music industry today is a joint effort by the major record labels to sue the well-known AI music generators Suno, Inc. and Uncharted Labs, Inc. (the developer of Udio). Universal, Warner and Sony have all charged that these companies illegally used copyrighted material to train their AI systems.
The AI companies claim they don’t allow users to run prompts that include artist names, but there are ways around that; simply adding a space between the letters of a name can slip past the filter. According to the lawsuit, the prompt “m a r i a h c a r e y, contemporary r&b, holiday, Grammy Award-winning American singer/songwriter, remarkable vocal range” is accepted and produces a clone of “All I Want For Christmas Is You” with a near carbon-copy of the artist’s vocal, even reproducing the first two verses of the original song.1
The music label giants are suing for up to $150,000 per infringement, which could amount to millions. The RIAA (Recording Industry Association of America), the trade organization2 that supports and promotes record labels commercially in the US, is bringing the joint lawsuit on behalf of the three major labels. RIAA chairman and chief executive Mitch Glazier is quoted as saying, “unlicensed services like Suno and Udio that claim it’s ‘fair’ to copy an artist’s life’s work and exploit it for their own profit without consent sets back the promise of genuinely innovative AI for us all”.
Why is this becoming such a problem?
Beyond outright stealing from artists, AI music generation could turn the market for music sampling upside down. It could eliminate the need to pay licensing fees for using portions of songs in new work, on the theory that an AI-generated sample is close to, but not technically, the artist’s recording.
Copyrighted songs could be used without consent to train AI through datasets under a claimed “fair use” defense, teaching the machines (along with society) that it’s okay to copy artists’ work wholesale as long as the new work is “transformative”.
All this undercuts music licensing in general, which could cost artists a lot of money. Those generating AI music that sounds indistinguishable from celebrity artists could do so for free, then turn around and make a profit. We’re living in a world dominated by social media and viral song clips.
So how does software detect that a song is AI generated?
It’s becoming increasingly difficult to detect AI-generated content. The rules aren’t as scientific as we might think: the Turing test was an early attempt to distinguish AI from human creation, but it hinged on whether a human judge subjectively felt unable to tell the difference after interacting with both. That’s not very scientific.
In earlier detection software, mistakes in AI generation gave it away; in a generated photo of a woman with earrings, for example, one earring might differ from the other because the AI couldn’t keep details consistent across the image. Now all that is changing as models improve faster than ever, and those detectors can no longer be relied on.
In music, detection relies on Music Recognition Technology (MRT) and Automated Content Recognition (ACR). These systems produce a digital fingerprint of a known track and compare it against fingerprints of other content, flagging even minor deviations. They can recognize new uses of an artist’s song, a likeness of that artist’s voice, and whether a track was likely generated with artificial intelligence, though they don’t catch everything. There are three main detection methods:
Artifact detection: like the earring example from photo detection, it flags misplaced artifacts in a musical track, things that are “off base from reality”.
Watermark recovery: Some AI detectors can identify watermarks from the AI generator program itself, embedded in the track and undetectable by human ears.
Voice ID: detects tracks that use the likeness of a known artist’s voice, but it doesn’t work as well as its makers would like. If there isn’t enough biometric data (voiceprint features extracted from the speech signal, such as frequency spectrum and speed/tone of voice), the software gives inconclusive results.
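To make the fingerprint-comparison idea concrete, here is a toy sketch. It is not the actual MRT/ACR algorithm (commercial systems use far more robust landmark hashing); it simply takes the dominant frequency bin of each audio frame as a crude fingerprint and measures how many frames match between two tracks. All signals and function names here are illustrative.

```python
import numpy as np

def fingerprint(signal, frame_size=1024):
    """Toy spectral fingerprint: the dominant FFT bin of each frame.
    Real recognition systems hash constellations of spectral peaks."""
    n_frames = len(signal) // frame_size
    peaks = []
    for i in range(n_frames):
        frame = signal[i * frame_size:(i + 1) * frame_size]
        spectrum = np.abs(np.fft.rfft(frame))
        peaks.append(int(np.argmax(spectrum)))
    return peaks

def similarity(fp_a, fp_b):
    """Fraction of frames whose dominant frequency bin matches."""
    n = min(len(fp_a), len(fp_b))
    return sum(a == b for a, b in zip(fp_a[:n], fp_b[:n])) / n

# Two synthetic "tracks": an identical tone versus a different pitch.
sr = 8192
t = np.arange(sr) / sr
track = np.sin(2 * np.pi * 440 * t)
clone = np.sin(2 * np.pi * 440 * t)   # same content, matches fully
other = np.sin(2 * np.pi * 880 * t)   # different pitch, no match

print(similarity(fingerprint(track), fingerprint(clone)))  # 1.0
print(similarity(fingerprint(track), fingerprint(other)))  # 0.0
```

A real system compares fingerprints against a large reference database rather than pairwise, and tolerates pitch shifts, tempo changes, and noise that would defeat this toy version.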
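Watermark recovery can also be sketched in miniature. The example below is a simplified spread-spectrum scheme of my own construction, not what any actual AI generator embeds: a pseudo-random sign pattern is added to the audio at low amplitude, and a detector recovers it by correlating against the known pattern. Production watermarks are psychoacoustically shaped so they are genuinely inaudible.

```python
import numpy as np

# Hypothetical hidden pattern known only to the generator and detector.
rng = np.random.default_rng(seed=42)
N = 8192
mark = rng.choice([-1.0, 1.0], size=N) * 0.1

t = np.arange(N) / N
host = np.sin(2 * np.pi * 440 * t)   # plain "audio"
marked = host + mark                  # watermarked copy

def watermark_score(signal, mark):
    """Normalized correlation with the known pattern:
    close to 1 if the watermark is present, close to 0 if absent."""
    return float(np.dot(signal, mark) / np.dot(mark, mark))

print(watermark_score(marked, mark) > 0.5)  # watermark detected
print(watermark_score(host, mark) > 0.5)    # no watermark found
```

The detector needs the secret pattern to score a track, which is why only the generator’s own vendor (or a partner) can reliably recover its watermarks.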
In a nutshell, we’re already having trouble identifying AI-generated music. As models learn faster and faster, it will become even more difficult to detect. Copyrighted material used under claimed “Fair Use” protections could end up flooding streaming platforms, devaluing artists’ music and making money off their backs in the long run. There are options out there for cooperation and co-existence. Let’s hope they can be found soon.