Legal Hallucinations
For a lawyer, an LLM looks like a dream: a tireless associate that can summarize 500-page depositions in seconds or draft a motion while you grab coffee. But when lawyers treat a probabilistic word-predictor like a legal database, they aren't just using the wrong tool; they are bringing a "hallucinating" witness to the stand.
Lawyers who treat an LLM like a licensed attorney are risking a high-stakes professional catastrophe.
In the legal world, the "right tool" for research is a verified database like Westlaw or LexisNexis. The "wrong tool" is a Generative AI model that prioritizes fluency over factuality.
For a busy attorney, an LLM feels like a superpower. You type: "Write a motion to dismiss based on New York's statute of limitations for medical malpractice," and within seconds you have a polished, professional-sounding document complete with citations. The catch is that LLMs do not "know" the law; they predict the next most likely word in a sentence. Because legal citations follow a highly predictable format ("Name v. Name, [Volume] [Reporter] [Page]"), the model is remarkably good at inventing cases that sound real but do not exist.
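To see why "looks like a citation" is not the same as "is a citation," consider the minimal sketch below. The regex and the second citation are invented here for illustration; real Bluebook formats are far more varied. The point is that a fabricated case passes a purely structural check just as easily as a real one, and only a lookup in a verified database can tell them apart.

```python
import re

# A rough pattern for "Name v. Name, Volume Reporter Page" citations.
# Illustrative only: real citation formats are far more varied than this.
CITATION_PATTERN = re.compile(
    r"[A-Z][\w.&' -]+ v\. [A-Z][\w.&' -]+, \d+ (?:U\.S\.|F\.\dd|F\. Supp\. \dd) \d+"
)

citations = [
    "Brown v. Board of Education, 347 U.S. 483",  # a real case
    "Smith v. Acme Logistics, 512 F.3d 1021",     # invented for illustration
]

for cite in citations:
    # Both strings pass the format check; the pattern cannot tell real from fabricated.
    status = "looks like a valid citation" if CITATION_PATTERN.match(cite) else "malformed"
    print(f"{cite} -> {status}")
```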
According to recent reports from LawNext, the era of "I didn't know the AI was lying" is officially over. Judges are no longer amused; they are handing out sanctions, fines, and disciplinary referrals.

Take the case of Ellis George LLP and K&L Gates LLP in California (2025). Lawyers used a cocktail of AI tools (including Google Gemini and CoCounsel) to generate a brief. They failed to verify the citations, and when a Special Master flagged the errors, they filed a "corrected" brief that still had multiple fake cases. The result? A $31,100 fine and a stinging rebuke for acting in "bad faith."

In Toronto, the case of Ko v. Li saw a lawyer submit a factum with four non-existent cases. When questioned, the lawyer admitted she hadn't checked the citations. The judge's response was a masterclass in legal shade: "At its barest minimum, it is the lawyer's duty not to submit case authorities that do not exist."
The Risk of "Automated Misinformation"
This isn't just a problem of a few lazy lawyers. The OWASP GenAI Top 10 classifies this failure mode as LLM09:2025 Misinformation. When we rely on these models for critical information without a "human-in-the-loop" verification process, we aren't just making mistakes; we are polluting the information ecosystem. As Damien Charlotin's Hallucination Database shows, there are now hundreds of cases worldwide, from Argentina to Zimbabwe, in which AI-generated "phantom cases" have surfaced in court. It's a global epidemic of using a creative writing tool to perform a forensic accounting of the law.
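As a rough idea of what a "human-in-the-loop" gate can look like in practice: every citation in an AI-drafted filing is checked against a verified source, and anything unconfirmed is routed to a person before the document goes out. The sketch below is purely illustrative; the `verified_citations` set stands in for a real Westlaw or LexisNexis lookup and is not an actual integration with either service.

```python
# Minimal human-in-the-loop sketch: nothing unverified leaves the building.
# `verified_citations` is a placeholder for a check against a real legal database.

def citations_needing_review(draft_citations: list[str], verified_citations: set[str]) -> list[str]:
    """Return every citation a human must confirm before the brief is filed."""
    return [c for c in draft_citations if c not in verified_citations]

draft = [
    "Brown v. Board of Education, 347 U.S. 483",
    "Smith v. Acme Logistics, 512 F.3d 1021",  # invented for illustration
]
verified = {"Brown v. Board of Education, 347 U.S. 483"}

for citation in citations_needing_review(draft, verified):
    print("VERIFY BEFORE FILING:", citation)
```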
The Moral of the Story
LLMs are incredible tools for brainstorming, formatting, and summarizing. They are the "right tool" for overcoming writer's block or drafting a polite email to a difficult client. However, the moment you ask an LLM to tell you what the law is, you have picked up the wrong tool. In a profession where your reputation is built on the accuracy of your words, delegating your research to a machine that "hallucinates" for a living isn't just a shortcut; it's professional malpractice.