On May 13, 2025, a federal judge in San Jose, California, ordered artificial intelligence company Anthropic to respond to allegations that it had submitted a legal filing containing a fabricated academic citation. The dispute arises from a music copyright lawsuit before the San Jose federal court, in which Universal Music Group, Concord, and other publishers allege that Anthropic used hundreds of copyrighted song lyrics to train its AI system, Claude, and that the system reproduces those lyrics in their entirety.
Court documents show that a paper Anthropic cited from the journal The American Statistician does not exist. Plaintiffs' attorneys confirmed that no such paper had been published in the journal, and the individuals named as authors stated that they had not written the study. Anthropic's expert witness, Olivia Chen, was accused of potentially using Claude to generate legal arguments, but there was no direct evidence of intent, and Anthropic characterized the problem as a citation error, suggesting that the reference may have conflated different papers.
This incident highlights the legal risks of AI-generated content. In 2023, a New York federal court fined attorneys for citing fictitious case law produced by ChatGPT, and at least seven cases across the United States have involved AI-generated false content in legal documents.