Analysis: What Anthropic’s deal with music publishers does and doesn’t do
Editor’s note: This analysis by Bruce Barcott, Transparency Coalition Editorial Lead, originally appeared on Tech Policy Press, the leading outlet for discussion of AI tech, trends, and policy. We are hosting the article under a co-publishing agreement.
Jan. 16, 2025 — In late 2023, the nation’s biggest music publishers filed a federal lawsuit against Anthropic, developer of the AI system Claude, alleging copyright infringement. In the suit, the publishers claimed Anthropic’s use of their music to train Claude amounted to an unlawful use of their intellectual property. The publishers pointed to alleged evidence of Anthropic’s unauthorized use of 500 specific songs, including Katy Perry’s “California Gurls,” Maroon 5’s “Moves Like Jagger,” and the Rolling Stones’ “Gimme Shelter.”
On Jan. 2, 2025, Anthropic entered into an initial settlement with the publishers—but don’t let the word “settlement” fool you. The deal Anthropic reached with Universal Music et al. merely acts as a temporary truce to keep a federal judge from imposing an injunction that could hamper the public use of Claude, Anthropic’s popular AI model. The larger battle over copyright infringement, and the lawsuit itself, continues.
The full seven-page settlement was filed with the federal court in Northern California on Jan. 2.
The Jan. 2 settlement addresses the music publishers’ motion for a preliminary injunction against Anthropic.
A little more than a year ago, the publishers asked the judge to require Anthropic to “implement effective guardrails” to prevent Claude from offering output that reproduces, distributes, or displays the lyrics to the 500 songs under copyright. The motion also asked for an injunction “precluding Anthropic from creating or using unauthorized copies of Publishers’ lyrics to train future AI models.”
In the intervening months, Anthropic apparently convinced the music publishers that it now has guardrails in place sufficient to prevent the output of those lyrics. In the Jan. 2 settlement, Anthropic agrees to “maintain its already implemented Guardrails in its current AI models and product offerings. With respect to new large language models and new product offerings that are introduced in the future, Anthropic will apply Guardrails on text input and output in a manner consistent with its already-implemented Guardrails.”
What does it mean?
The first thing to notice is that the Jan. 2 filing represents the first significant de-escalation in the heated battle between copyright holders and AI developers. Clearly, Anthropic’s lawyers and executives are talking with music industry executives. That’s a good sign.
Second, new state AI laws appear to be forcing AI developers to change their ways. Last March, Tennessee passed the nation’s first law—the ELVIS Act—protecting musicians from unauthorized AI impersonation. In September, California enacted a similar law protecting performers as well as a training data transparency law that will soon require AI companies to post information about the datasets used to train each model. AI developers are starting to realize the days of scraping pirated material from the internet are coming to an end. Serious companies are now entering into licensing agreements to use high-quality data to train their AI models.
Anthropic has positioned itself as a more ethical AI company. Within industry circles, it’s known as the place OpenAI executives and engineers go when they can no longer stomach the questionable ethics and lack of safety culture within Sam Altman’s shop. The Jan. 2 deal is one more signal to the industry: Anthropic is pushing ahead into AI’s Training Data 2.0 era, where datasets are legally licensed and transparently posted.
Does this end the music copyright lawsuit?
No, it does not. The Jan. 2 settlement is merely a side deal to prevent the judge from shutting down all or part of Claude.
It does signal, however, the increasing likelihood of an all-encompassing legal agreement between Anthropic and the music industry. It’s worth considering that neither side wants this case to proceed to a full trial. A trial would take months, if not years, gobble up tens of millions of dollars in legal fees, and result in a legal precedent over which neither side has control.
Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy and one of today’s most insightful voices on AI governance, observed recently that the Jan. 2 agreement represented a strategic win for Anthropic. She wrote:
“Anthropic's lawyers took the judge's attention away from the training phase — and AI companies' questionable practice of obtaining content without consent or compensation — and focused on the output phase instead. They obtained a legal acknowledgment that their existing technical guardrails might be effective tools to prevent infringing outputs. By focusing on effective technical guardrails during the output phase, they minimize the lack of consent and compensation.”
I disagree slightly. True, the agreement spotlights the guardrails Anthropic has erected to prevent Claude from outputting the lyrics to “Gimme Shelter.” Later in the court filing, however, the music publishers make it clear that they’re not dropping their objection to the unauthorized use of their lyrics as training data: “The Parties continue to dispute, and this stipulation does not resolve or affect, Publishers’ request in their Motion for Preliminary Injunction that Anthropic refrain from using unauthorized copies of Publishers’ lyrics to train future AI models.”
Two big questions to answer
The Jan. 2 court filing leaves two major questions unsettled.
First, what exactly are the “Guardrails” that Anthropic has erected to prevent Claude from outputting copyright-infringing answers to its prompts? The AI developer may have given the music industry lawyers a peek at those safeguards, but they remain a mystery to the rest of us—which stifles critical scrutiny and prevents a larger public discussion that could shape the adoption of similar safeguards as an industry standard.
Second, will the two sides negotiate a larger agreement covering the use of copyrighted lyrics as AI training data? If negotiations remain fruitful, such an agreement would avoid a lengthy and costly copyright infringement trial, but it could also keep the terms of a training data usage contract under wraps. That would benefit Anthropic and the corporate executives at Universal, Capitol, and Polygram. But it would do nothing for individual artists, who need to know the monetary value of their lyrics and music to get rightfully paid for their labor.
The real-dollar value of books as training data was leaked to the public late last year when HarperCollins notified individual authors about the terms of the publisher’s AI training data deal with Microsoft. If Anthropic strikes a similar deal with the music industry’s leading publishers, we may see a similar leak of terms once the artists themselves get wind of the numbers.