Anthropic Reaches “Landmark” Copyright Settlement with Authors That Leaves the AI/Copyright Issue in Legal Limbo

Anthropic, the AI start-up behind the Claude large language model (LLM), has reached a preliminary settlement in a high-stakes class action lawsuit brought by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson. The authors alleged that Anthropic infringed their copyrights by using pirated and unauthorized copies of their books to train Claude. The settlement, which avoids a December trial, is set for preliminary court approval by September 3, 2025.

Case Background & Why It Matters

  • I discussed the fair use ruling back in June 2025
    U.S. District Judge William Alsup issued a split summary judgment, finding that:
    • Training Claude on digitally scanned versions of legitimately purchased books was “exceedingly transformative” and qualified as fair use.
    • Use of pirated books, however, downloaded from shadow libraries like Books3 and LibGen and stored in a centralized internal library, was not fair use. This distinction set the stage for a high-stakes trial.
  • Potential Damages
    With over 7 million works implicated, statutory damages (at up to $150,000 per infringement) could have reached billions, or even over $1 trillion, a figure often cited by commentators to denote “doomsday” liability.
  • Settlement Details
    No financial terms have been disclosed. The agreement avoids trial and still requires preliminary court approval, expected by September 3 per filings from both parties.  
  • Why It is “Historic”
    Attorneys representing the authors have hailed the deal as “historic,” signaling a major development in AI-related copyright law. Industry watchers anticipate this may influence how courts address similar disputes involving other AI giants, OpenAI, Meta, and Microsoft among them.
  • Why It is not so “Historic”
    By settling the case, the authors have denied the world a legal precedent outlining just what is, and is not, legally allowed when it comes to training an LLM on third-party works. Without such a precedent as a guide, everyone creating an LLM runs the risk of being held liable for what could turn out to be an extremely large copyright damage award. This uncertainty hampers every industry that relies on AI and threatens to hinder future AI advancements.

What You Should Know

  • Fair Use Does Not Excuse Piracy
    Even transformative use will not protect unlicensed, pirated content. Always secure rights from the start.
  • Transformative Training May Be Defensible
    Using purchased material for AI training may qualify as fair use, but remain cautious and document your process.
  • Class Actions Can Cripple Industries
    The sheer scale of possible damages in AI/copyright disputes is sufficient to force settlements, even for startups. The cost of even getting to a jury may be enough to sink many AI startups.
  • Watch Your Settlement Terms
    While a settlement is often preferable to trial, be careful that you do not agree to anything that may act as an admission or precedent against you moving forward.

How You Can Protect Yourself 

1. Audit Your Training Data
Work closely with your intellectual property lawyer to identify and differentiate properly licensed content (e.g., purchased or waived rights) from potentially infringing sources. A documented vetting process can be critical in fair use, infringement, and willfulness analyses.

2. Licensing Strategy Development
If your AI or digital product relies on third-party content, have an attorney specializing in intellectual property help negotiate your licensing agreements to transform uncertainty into clarity and to mitigate litigation risks.

3. Navigate Emerging AI Precedents
As the AI legal landscape continues to shift, stay abreast of how courts are treating the use of copyrighted material to train AI, paying particular attention to what courts feel is, and is not, fair use when it comes to consumer-facing AI models.

4. Defensive Strategy Against Class Actions
Proactivity is your strongest shield. 

  • Document your content sources
  • Maintain written retention policies for lawfully acquired/licensed works
  • Avoid blind or blanket data ingestion without written rights clearance

What’s Next?

  • Final Court Approval: The proposed settlement, whose terms remain confidential, must receive preliminary approval by September 3. After that, its details may become public.
  • Class Member Response: Some authors eligible for the class have only recently received notice; their response to the terms could shape final outcomes.  
  • Broader Industry Impacts: While the settlement avoids setting court precedent, similar cases, like those brought by music publishers, are underway and may provide more solid guidance to the industry.

Final Word

While the Anthropic–Bartz settlement is not the “landmark” AI copyright case the plaintiffs would have you believe, it does signal that major AI players are worried enough about potential copyright infringement liability to start writing checks to authors. It also suggests that AI companies are likely to be more cautious about how they train their LLMs moving forward. Finally, it is clear that every major AI player will be working closely with intellectual property counsel to maximize profits while minimizing liability in the untamed Wild West that is AI copyright law.

Brett Trout, Des Moines, Iowa

Posted in AI, Artificial Intelligence, Choosing the Best Lawyer, Copyright Law, Internet Law, Litigation.