Kamin: AI Citations Present Cautionary Tale for Attorneys (Jan. 20, 2026)
If the past few years have taught attorneys one thing about artificial intelligence, it’s that they should not trust AI for case citations.
In December 2025, a friendly neighborhood judge asked if I had heard about the latest decision imposing sanctions for faulty citations. To be honest, I wasn't quite sure about this most recent case, only because there have been so many in recent years. For example, the California Applicants' Attorneys Association wrote about one such case in September. Then WorkCompAcademy wrote about a similar but different case in December. Long before those articles, a court in 2023 sanctioned New York attorneys for using as many as six imaginary case citations, prompting state bar associations to take notice. To be clear, the judge I spoke to was not referring to any of these cases; he was talking about a newer one.

Why is this happening?

AI-powered engines occasionally make things up out of thin air, and when they do, those fabrications are known as "hallucinations." Attorneys rushing to prepare pleadings are simply going to AI chatbots for case citations and legal summaries, then "copypasta-ing" the output right into their trial briefs, petitions and appellate briefs. (Copypasta is when someone bombs a message board or social media app with a block of copied-and-pasted text, often ridiculous in nature.)

The judges assigned to these attorneys' cases, and their clerks, must then attempt to vet these citations. (Interestingly enough, these folks are now similarly situated to high school and college teachers who must determine whether the work before them is real or simply plagiarized garbage.) When the judges cannot find the citations anywhere and ask the attorneys to produce the cases, the attorneys cannot do so, because the cases simply do not exist.

Yikes! That's not a good spot to be in.

How to avoid this problem

There's a simple way to avoid this problem, whether you're using AI, Google or your favorite legal treatise to do your legal research: When you see a citation, pull up the specific case and actually read it. LexisNexis should have most of those cases, and if that's too expensive or difficult, many of them can be located online, too.

By reading the actual opinion, you may find nuances that either support or undermine your argument. The last time I read a case an opponent cited, I saw that it handed the defendant a victory on my precise issue. As a result, I was happy to see opposing counsel make that argument, and more than happy to point it out to our trial judge. (The cited case featured multiple nuanced sub-issues on the same topic, and opposing counsel thought it favored the plaintiff's side. A close reading, however, showed that the defendant won on my sub-issue. One wouldn't know that without reading the case, as the summaries didn't mention my sub-issue.)

Reading the cases carries another advantage as well: It gives you the opportunity to pull-quote language from the opinion. There's no need to reinvent the wheel when a wise, savvy predecessor judge, commissioner or appellate justice said it better than anyone else could years ago. And of course, after quoting them, you can paraphrase before and afterward, just for dramatic effect. If you do quote a prior opinion, just make sure you cite it and give attribution where attribution is due. When I was a journalist and wasn't quite sure how to summarize some complex legal analysis, I would often resort to an exact quote, because it was often the most accurate way to go.
In summary, read the cases you're citing, or at the very least verify that they exist and generally stand for the proposition you're citing them for. There are a million ways it can help you; not reading them can only hurt you.

What not to do

One tactic that has failed repeatedly when an attorney is asked to explain AI hallucinations is blaming the chatbot. Blaming a bot for work an attorney is billing for has failed in every jurisdiction where I have seen this fact pattern occur. Judges view that explanation as lazy and sanctionable. In a scenario like that, one should take the least offensive path, even if it means eating a little AI crow.

Conclusion

Beware that AI chatbots may hallucinate false citations that can get you into a heap of trouble, and that heap of trouble can damage your reputation. Don't fall into that trap. Verify the cases you're citing and read them. You'll be more knowledgeable for it.

John P. Kamin is a workers' compensation defense attorney and partner at Bradford & Barthel's Woodland Hills location. He is WorkCompCentral's former legal editor. This entry from Bradford & Barthel's blog appears with permission.