Anthropic Bot Caught Making Up Court Evidence


Anthropic’s AI hallucinated a legal citation in a federal court filing, forcing its lawyers to issue a humiliating apology and potentially undermining the company’s entire defense against music publishers.

Key Takeaways

  • Anthropic’s legal team admitted Claude AI fabricated citation details in a federal court filing during a high-profile copyright lawsuit with major music publishers
  • The embarrassing error occurred in an expert witness report submitted to dismiss claims that Anthropic illegally used copyrighted song lyrics to train Claude
  • Judge Eumi K. Lee had previously dismissed most of the publishers’ claims but granted leave to amend, which they did in an April 25 amended complaint
  • The incident highlights growing concerns about AI reliability in legal settings and may influence future regulations on AI systems in professional contexts

AI Hallucination Creates Legal Nightmare

In a stunning development that has sent shockwaves through both the legal and AI communities, Anthropic found itself in an embarrassing predicament after its Claude AI hallucinated a legal citation in court documents. On May 15, 2025, Anthropic’s legal team was forced to admit that Claude had generated an incorrect citation in an expert witness report submitted to a California federal court during ongoing litigation with major music publishers. While the AI referenced a legitimate academic article, it completely fabricated the paper’s title and authors, creating a citation that doesn’t exist.

Anthropic’s attorneys from prestigious law firm Latham & Watkins described the error as “embarrassing and unintentional,” directly attributing it to an AI hallucination. The admission came in a critical motion to dismiss a lawsuit alleging Anthropic illegally used copyrighted song lyrics both to train Claude and in the AI’s outputs. This technical error has potentially serious implications, as it occurred in a high-stakes legal battle that could set precedents for how AI companies handle copyrighted material.

Publishers Strike Back Against AI Copyright Infringement

The underlying lawsuit represents a coordinated effort by music industry heavyweights Universal Music Group, Concord, and ABKCO, who accuse Anthropic of systematically using their protected lyrics without authorization. The publishers had already strengthened their position by filing an amended complaint on April 25, 2025, after Judge Eumi K. Lee dismissed most of their claims in March but granted them leave to amend. Anthropic responded with another dismissal motion on May 9, which contained the now-infamous hallucinated citation.

Matt Oppenheim, attorney for the music publishers, didn’t miss the opportunity to highlight the irony of the situation. “This is part of a disturbing pattern of AI misuse in legal proceedings,” Oppenheim stated after the hallucination was revealed. The publishers argue that Anthropic’s inability to control its own AI’s accuracy in a court filing undermines the company’s broader claims that it can responsibly manage copyright compliance in its training and outputs.

Anthropic Takes Responsibility While Defending Its Position

Ivana Dukanovic of Latham & Watkins, representing Anthropic, took full responsibility for the error, explaining that while the expert had relied on legitimate research, the AI distorted the citation details. This admission highlights the complex relationship between human experts and AI tools in professional settings. Despite the embarrassing mishap, Anthropic continues to argue against strict liability for AI outputs in its legal filings, maintaining that occasional errors don’t constitute systematic copyright infringement.

“We deeply regret this error and take full responsibility,” Dukanovic wrote in the correction filing. “The underlying research referenced was legitimate, but the citation details were incorrectly generated by the AI system. We have implemented additional verification procedures to prevent similar occurrences.” The apology, while necessary, potentially weakens Anthropic’s position that its AI can reliably distinguish between fair use and copyright infringement.

Broader Implications for AI in Legal and Creative Industries

This incident has sparked renewed debate about AI reliability in high-stakes professional contexts. Legal experts warn that such errors could “mislead courts and harm clients,” highlighting the dangers of over-reliance on AI systems without proper verification. The hallucination demonstrates how generative AI can distort even legitimate, properly sourced material, inventing plausible-looking details where accurate information is missing.

The controversy reflects wider tensions between AI companies and content creators across multiple industries. As Judge Lee scrutinizes both sides’ arguments, her rulings will likely influence how AI systems are regulated and deployed in legal and creative sectors. The case underscores the critical importance of human oversight when using AI tools in professional settings, particularly when legal consequences are at stake.

For conservative observers concerned about technological overreach and proper protection of intellectual property rights, this case represents a crucial test of whether AI companies will be held to traditional standards of accountability. The outcome could determine whether content creators maintain control over their work in an increasingly AI-dominated landscape, or whether tech companies can continue to use copyrighted materials with minimal consequences.

Sources:

Anthropic Claude Hallucination Apology – Digital Music News

Anthropic Apologizes After Claude Hallucination – Perplexity

Music Publishers vs. Anthropic: Ongoing Case – Digital Music News

Anthropic Lawyers Apologize to Court Over AI Hallucination – Music Business Worldwide

Anthropic vs. Music Publishers: A Legal Symphony Gone Off Key – OpenTools

Anthropic’s Lawyers Take Blame for AI Hallucination – Economic Times