The world’s largest music company has entered the AI industry. Last year, Universal Music Group (UMG), along with Warner Records and Sony Music Entertainment, sued two AI music startups for allegedly using their recordings to train text-to-music models without permission.
However, last month UMG announced a partnership with one of the defendants, Udio, to develop an AI music platform. Their joint press release promised the label would “do what’s right by UMG’s artists.” But the Music Artists Coalition advocacy group responded skeptically, stating: “We’ve seen this before – everyone talks about ‘partnership,’ but artists end up with scraps.”
This lawsuit is among dozens in US courts where artists, publishers, and studios argue that using their work for AI training violates copyright. Judges are grappling with how to apply copyright law to technology that challenges traditional authorship concepts. For many, this represents both a legal and ethical issue. In the Andersen v Stability AI case – one of the first class-action lawsuits concerning AI image generators – artists claim using their work without credit, payment, or consent “violates the rights of millions of artists.”
There’s no doubt creative workers are suffering from the AI boom, as generative AI replaces human creative labor. In January 2024, over a third of illustrators surveyed by the Society of Authors reported income loss due to AI, and one study predicts a 21% revenue decline for audiovisual creators by 2028.
In response, a new activist movement has united entertainment executives and artists against the tech industry through social media campaigns, crowdfunded lobbying, and lawsuits. The Human Artistry Campaign, founded on the principle that “AI can never replace human expression,” brings together creatives and executives to support legislation protecting artists from AI and big tech. However, some artists, creators, and civil liberties groups warn of another threat: big content companies.
What happens when well-intentioned creatives align with major media conglomerates that have historically exploited their labor and expanded copyright against public interest? While some artists justify this as an “enemy of my enemy” strategy, this approach fails if big content and big tech become allies.
Copyright lawyer Dave Hansen of the Authors Alliance argues that copyright lawsuits won’t protect artists from AI. Instead, they’ll lead to exclusive licensing deals between large media and tech companies, leaving everyone else out. History supports this cynical view – when streaming emerged, labels and studios profited while musicians, writers, and actors were left behind.
Will AI licensing be different? When Runway AI and Lionsgate made a licensing deal, United Talent Agency’s CEO Jeremy Zimmer asked whether artists involved in Lionsgate films would be compensated when their work trains AI models. In several multimillion-dollar publisher-AI company deals, authors received neither payment nor opt-out rights.
Even if US courts require tech companies to pay for AI training data, working artists likely won’t benefit. Creating a licensing system under current power imbalances could let media companies pressure artists to surrender training rights as a condition of employment – something voice actors have already faced. Mandatory licensing wouldn’t necessarily help artists either. Nor are licensing requirements reining in Big Tech: while giants like Google and OpenAI can afford to pay for data licenses, smaller open-source AI developers cannot. Ironically, using copyright to challenge Big Tech only strengthens its dominance.
Many proposals claiming to “protect artists” not only fail in that goal but risk harming both artists and the public. In the US, the NO FAKES Act – backed by major entertainment groups – aims to create a federal “digital replication right” to control nonconsensual AI copies of a person’s voice or appearance. However, civil liberties organizations like the Center for Democracy and Technology and the ACLU have raised concerns about the bill’s vague wording, inadequate free speech safeguards, and potential for misuse. The act would let individuals, including children, license their digital replica rights for up to a decade (five years for minors). It’s not hard to picture studio executives eagerly pressuring young artists to surrender control over their own identities.
Why do these solutions miss the mark? Because many copyright lawsuits, licensing schemes, and digital rights proposals are Trojan horses for big content companies. The Copyright Alliance, a powerful nonprofit that claims to represent the “copyright community,” pushes for strict copyright rules on generative AI. Although it professes to support individual creators, its board is filled with executives from media titans like Paramount, NBCUniversal, Disney, and Warner Bros.
Why all the public coalition-building when the entertainment industry could simply strike lucrative deals with tech firms behind the scenes? Because Big Content relies on artists. Their media empires need artists’ work to profit, their lobbying efforts require artist backing to appear credible, and their new AI partners need artists’ creations.
This reality highlights a strategy that worries entertainment executives more than AI: organized labor. Unionized creative professionals, such as those in the Writers Guild and SAG-AFTRA, have won substantial AI protections through strikes and collective bargaining. Copyright is too outdated, rigid, and clumsy to determine the future of an already vulnerable creative workforce. If Big Content genuinely wanted to shield artists from AI, it would stop trying to sell their voices as training data and start heeding what they have to say.
Alexander Avila is a video essayist, writer, and researcher.
Frequently Asked Questions
General / Beginner Questions

1. What is this article/video about?

It’s about the legal battles between big media companies and AI companies. It argues that the media giants are framing themselves as the little guys fighting a tech Goliath when, in reality, they are powerful corporations with their own motives.

2. Why are media companies suing AI companies?

They are suing primarily over copyright infringement. They claim that AI models were trained on their articles, stories, and other content without permission or payment, which they argue is illegal and devalues their work.

3. What does “this isn’t the underdog story” mean?

It means we shouldn’t see this as David vs Goliath. The media companies are themselves massive, influential corporations, and they are using this narrative to gain public sympathy and strengthen their legal and business position.

4. What is AI training data?

AI training data is the massive amount of text, images, and other information that an AI model learns from. To become knowledgeable, AI systems like ChatGPT read billions of words from books, websites and, yes, news articles scraped from the internet.
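To make that idea concrete, here is a minimal, hypothetical Python sketch of what a text training corpus looks like at the lowest level: a few placeholder documents are tokenized and flattened into the token stream a language model is trained to predict. The documents and the naive tokenizer are illustrative assumptions, not any real company’s pipeline.

```python
import re

# Hypothetical scraped documents (placeholders, not real training data).
documents = [
    "Breaking news: an example article scraped from a news site.",
    "Chapter one of a public-domain novel, also swept into the crawl.",
]

def tokenize(text: str) -> list[str]:
    # Naive word/punctuation split; production systems use subword tokenizers,
    # but the principle is the same at vastly larger scale.
    return re.findall(r"\w+|[^\w\s]", text.lower())

# The "training data" is ultimately just this flattened token stream, which
# a language model is repeatedly asked to predict, one token at a time.
corpus = [token for doc in documents for token in tokenize(doc)]

print(f"{len(corpus)} tokens, e.g. {corpus[:6]}")
```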
Advanced / Deeper Questions

5. If the media’s content is publicly available, why can’t AI use it for training?

This is the core of the legal debate. Media companies argue that making content publicly available doesn’t mean it’s free to use for commercial profit. AI companies often rely on the “fair use” doctrine, which allows limited use of copyrighted material for purposes like research and education, but its application to AI training is untested in court.

6. What are the real motivations of the media companies, beyond copyright?

Beyond protecting copyright, their motivations likely include:

Leverage for Licensing Deals: they want to force AI companies into paying for the use of their content, creating a new revenue stream.

Market Control: they want to ensure their brands remain authoritative sources of information and aren’t replaced by AI.

Competition: they see AI as a direct competitor for audience attention and advertising dollars.