

A federal judge has rejected music publishers’ attempt to expand their copyright lawsuit against Anthropic with claims that the AI company used pirated materials to train its Claude chatbot.
According to a court filing, US District Judge Eumi Lee last week (October 8) denied a motion by Universal Music Publishing Group, Concord Music and ABKCO Music to amend their 2023 complaint.
Bloomberg Law reported, citing a hearing in San Jose, that Judge Lee found the amendment inappropriate because the music publishers had failed to investigate the piracy allegations concerning Anthropic’s downloading of works.
The music publishers sued Anthropic in 2023 for copyright infringement, alleging that its Claude chatbot regurgitated copyrighted lyrics, suggesting that the company had trained the chatbot on those lyrics without permission.
In August, the music publishers argued that Anthropic hid the fact that it used BitTorrent to pirate copyrighted lyrics. The publishers stated that they discovered the issue through a separate copyright lawsuit against Anthropic.
A number of book authors sued Anthropic in 2023, alleging that Claude had been trained on their books without permission, and evidence of Anthropic using BitTorrent was presented in that case.
Most recently, Billboard reported that Anthropic called the proposed amendment an improper attempt to “fundamentally transform this case at the eleventh hour.” The company maintained that publishers had sufficient time to uncover the torrenting activity through standard legal discovery procedures.
While Judge Lee’s latest decision dealt a setback to the music publishers, they recently secured a positive development in the case. Also last week (October 6), Judge Lee ruled that they can press forward with claims that Anthropic bears legal responsibility when users of its Claude chatbot generate copyrighted lyrics.
The decision, which you can read in full here, marks the second attempt by Amazon- and Google-backed Anthropic to dismiss the infringement claims. Lee previously granted a dismissal motion but allowed the publishers to file an amended complaint.
This time, the judge found the publishers adequately argued that Anthropic could have been aware of user infringement, profited from allowing it to continue, and could have curbed it through “guardrails” designed to prevent copyright violations. The judge’s ruling addressed secondary liability and DMCA claims.
The lawsuit is part of a broader wave of litigation examining whether copyright law permits AI developers to train models on protected material. Tech giants Microsoft and Meta, and AI firms like OpenAI, Suno and Udio face similar legal challenges.
Anthropic is among the first AI companies to resolve such claims, reaching a $1.5 billion settlement with authors in September.