In Episode 173, Kelly Twigger discusses the sanctions imposed following the submission of AI-generated hallucinated citations in a brief to the Special Master on discovery issues in Lacey v. State Farm Gen. Ins. Co.
Introduction
Welcome to our Case of the Week segment of the Meet and Confer podcast. My name is Kelly Twigger. I am the Principal at ESI Attorneys, a law firm for ediscovery and information law, and the CEO and Founder at Minerva26, where we take the insights from our practice and provide a strategic command center for you to leverage the power of Electronically Stored Information (ESI). Thanks so much for joining me today.
Our Case of the Week segment is brought to you by Minerva26 in partnership with ACEDS. Each week on this segment, I choose a recent decision in ediscovery case law and talk about the practical considerations of that decision for counsel to apply in their practice and for other legal professionals to know about and understand as they’re engaging in ediscovery.
This week’s decision hits on an issue that we are seeing regularly, and it is a tremendous cause for concern and a wake-up call for law firms: the submission of briefs to a court that contain hallucinated citations from Generative AI. This week’s decision came out a few days before another high-profile story broke that I covered on LinkedIn, in which lawyers for Anthropic, the Generative AI company that brought us Claude AI, filed a declaration from an expert at Anthropic that is alleged to have contained a cite to a non-existent academic article to bolster the company’s arguments over the alleged misuse of song lyrics to train its Claude AI tool. If you don’t already know, Anthropic is an AI company whose tagline is “through our daily research, policy work and product design, we aim to show what responsible AI development looks like in practice.” #irony.
In the Anthropic matter, the judge ordered the citing party to file a declaration and the other side has responded, but we don’t yet have a ruling from the court, so we’ll catch you up on that in a later episode.
Keeping that open issue in mind, let’s turn to this week’s case, in which counsel for the plaintiff filed a brief on a privilege issue with the Special Master, Judge Michael Wilner, who served as a United States Magistrate Judge in the Central District of California for more than 13 years. The brief, as we’ll see, contained hallucinated citations and other problems stemming from the use of AI to create an outline for the initial brief.
This decision comes to us from the Lacey v. State Farm General Insurance Company matter and is dated May 6, 2025.
Facts
Let’s dive into the facts here.
The ruling that we are addressing here is the order of the Special Master imposing non-monetary sanctions and awarding costs. Also important to this analysis are the Special Master’s two earlier orders, dated April 15, 2025, and April 20, 2025. Those orders are included in the Appendix to this decision, and I’ll include the facts from those as well for context. It’s important to note here that Judge Wilner took every step possible to protect the lawyers and firms at issue in advance of this becoming public, specifically directing that his previous orders on this issue not be filed with the court until the issue had been resolved and he issued a final order. That final order is the one we’re discussing today.
Now let’s look at the specific facts that impacted the order before us.
The plaintiff, Mrs. Lacey, is represented by a large team of attorneys at two different law firms, Ellis George and K&L Gates. K&L Gates is involved because one of the lawyers from Ellis George moved to K&L Gates while this matter was pending. Judge Wilner was appointed Special Master in January 2025. At issue when he was appointed was a dispute between the parties regarding State Farm’s assertion of privilege in discovery. During the process of resolving that privilege dispute, Judge Wilner sought briefing from the parties on a discrete issue regarding a potential in camera review of some disputed documents. In camera review means that documents are submitted to the Judge for review on the issue at hand, but they are not given to the other side. It’s basically a way for the Judge to see if the argument for withholding them is legit without the documents being disclosed to the other side until the issue is resolved.
An attorney at Ellis George used various AI tools to generate an outline for the supplemental brief on that issue. And, yes, that collective gasp that you just made is what everyone is doing at this point.
That outline contained citations that did not exist. The attorney then sent the outline to attorneys at K&L Gates, who incorporated material from the outline into the brief. According to declarations later filed, apparently no attorney or staff member at either firm cite-checked or otherwise reviewed the research before the brief was filed with the Special Master. It’s important to note that K&L Gates did not know that the lawyer from Ellis George had used AI to prepare the outline, and they didn’t ask him. Because really, why would you?
Upon receiving the plaintiff’s brief, Judge Wilner reviewed the cited case law and was unable to “confirm the accuracy” of two of the cited authorities. He then emailed the lawyers to have them address the issue. Later that same day, K&L Gates resubmitted the brief to the Special Master without the two incorrect citations but with remaining AI-generated problems in the body of the text. An associate from K&L Gates, according to the Court, sent an “innocuous” email to the Special Master thanking him for catching the two errors that were “inadvertently included” in the brief and confirming that the issues had been addressed and updated in the revised brief. But Judge Wilner didn’t realize that plaintiff’s counsel had used AI and had resubmitted the brief with “considerably more made-up citations and quotations beyond the two initial errors” until he later issued an Order to Show Cause (OSC) soliciting a more detailed explanation.
In response to the Court’s order, plaintiff’s counsel provided sworn statements and the actual AI-generated outline that led to the false filings. Judge Wilner noted that the declarations also included profuse apologies and honest admissions of fault from the lawyers. In all, the 10-page brief from plaintiff’s counsel included cites to 27 authorities, nine of which were incorrect. Two of the cited decisions did not exist at all. Several quotations attributed to two specific judicial opinions were “phony” and did not accurately represent the language of the decisions. So, overall, a pretty enormous cluster in terms of a brief filing.
Following the review of the submissions on the AI issue, Judge Wilner advised the parties of the specific sanctions that he was considering and then held a hearing for the parties to address the issue.
Analysis
Let’s take a look at the Court’s analysis once the full scope of the issue was unveiled.
Following an analysis of his authority to impose sanctions under Rules 11 and 37 as a Special Master, Judge Wilner reviewed the language of those two rules.
Regarding Rule 11, he noted that it permits a court to impose a sanction to deter conduct by others, and that Rule 37(a)(5)(B) requires a party who files an unsuccessful discovery motion to pay the costs, including attorney’s fees, of the party who opposed the motion. No surprises in the language of either rule; they’re pretty commonplace, and we use them on motions to compel all the time, at least with regard to Rule 37.
Judge Wilner also noted the inherent authority of the court to impose sanctions for acting in bad faith and specifically cited Ninth Circuit case law on when that is appropriate. That’s never a good sign.
He then turned to the decisions supporting the imposition of sanctions for the improper use of AI in submissions to judges and noted that each one of those cases required a fact-specific analysis. Taking those decisions, their analysis and the facts here into account, Judge Wilner found that the lawyers here did act in bad faith for a number of reasons. The initial undisclosed use of AI to generate the outline was “flat-out wrong” and, according to the Court:
Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology — particularly without any attempt to verify the accuracy of that material. And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way.
But the Court also found that K&L Gates’ failure to check the validity of the research sent to them was troubling, as was their solution to the problem: removing the phony material and submitting a revised brief that still contained multiple errors due to the use of AI. At that point, the lawyers were on notice of a significant problem with the research and failed to disclose the issue to the Special Master. Basically, what he’s saying is: hey, I emailed you, I told you there was a problem. Why didn’t you figure out the scope of the problem before you resubmitted the brief?
Instead of stepping up and disclosing the issue, the associate who emailed the Special Master suggested the errors were inadvertent rather than disclosing what actually happened. Citing the Cohen decision, Judge Wilner noted that K&L Gates had the chance to fix the problem and chose to double down instead. Nothing in the information from the Court tells us whether the associate actually knew that AI was the problem when they first fixed the brief; we’d have to read the declarations on that specific point to know whether, in fact, that happened.
As a result, the Special Master found that the initial undisclosed use of AI, the failure to cite check the brief, and the resubmission of the defective revised brief without adequate disclosure of the use of AI demonstrated reckless conduct with the improper purpose of trying to influence the Special Master’s analysis of the original issue.
Those failures justified sanctions. The sanctions ordered by Judge Wilner included:
- striking the plaintiff’s supplemental briefs on the privilege issue;
- declining to award any discovery relief on the underlying privilege issue, which is a huge problem for plaintiffs; and
- monetary sanctions.
As to the monetary sanctions, Judge Wilner shifted the burden to pay $26,000 for his services in ferreting out this issue to the plaintiff’s two law firms, jointly and severally. If you’re not familiar with joint and several liability, that means that each firm can be held responsible for the full $26,000. That was after the Court had initially required State Farm, the defendant, to pay for the Special Master. So, a shifting of those costs based on the bad faith found.
In the earlier decisions, before the Special Master was put in place, the District Court heard from the parties, and the plaintiff complained that the cost of a Special Master was out of bounds for her: it was just too expensive for a plaintiff bringing an insurance claim dispute. Because of that, the District Court ordered State Farm to pay those costs. Here, however, the Special Master essentially found that this AI issue was an abuse of his services, so he shifted the $26,000 in costs back to the plaintiff’s law firms to pay. So that’s a significant issue as well.
Judge Wilner also ordered plaintiff’s counsel to pay defense counsel an additional $5,000 for fees incurred on the original motion. And that’s important, because Judge Wilner looked at the costs that defense counsel submitted, which were more in the realm of $25,000, and said that he didn’t feel requiring all of those costs was appropriate, so he awarded only $5,000 to defense counsel.
The lawyers did advise the Court that they had already informed their client — and that’s a good thing. As a result, Judge Wilner noted that Mrs. Lacey would not be financially responsible for the monetary sanctions, that those fell solely to the lawyers and their law firms.
This last point from Judge Wilner is, I think, really important, and it goes to how we operate as lawyers: we have a duty of competence that we need to be constantly mindful of, but we will also make mistakes. Judge Wilner’s final thought here was to decline to order sanctions or penalties against the individual lawyers. He found that their admissions of responsibility were full, fair and sincere, and he accepted their apologies and noted that justice would not be served by piling on them for their mistakes.
What I love about Judge Wilner’s decision here is that he really dug into what actually happened: even after the lawyers submitted the defective revised brief, they came back, owned up to everything, told the Court exactly what had happened, and apologized profusely. Juxtapose that against other decisions we’ve seen in this Generative AI context involving hallucinated citations, where we’ve seen a lot of lawyers doubling down. In the original Mata case, the lawyers doubled down on the claim that the cited authority truly existed, when it was pretty clear that it didn’t.
Takeaways
What are our takeaways from today’s decision?
Well, the first one is a takeaway that many might view as fairly obvious. It’s no secret at this point, after dozens of cases like this one, that using Generative AI tools for legal research is a terrible idea. Hallucinations abound. Please stop doing it.
This case raises a broader problem for law firms and supervising lawyers. What steps do firms and lawyers now have to take to ensure that the research they receive from associates or other attorneys is legitimate before they sign their names to it, and who will pay for those steps? What if attorneys start receiving hallucinated citations from other lawyers in correspondence? We exchange correspondence all the time about discovery disputes. What is the obligation of the lawyer who receives a hallucinated citation? How are they supposed to respond? Further, what will clients need to require from their counsel in the way of process to ensure that this does not happen?
It’s no secret that the pressure to do research quickly and in a cost-effective manner is a real problem. First, it’s hard to do that. Systems aren’t set up to make research easy. Second, clients don’t like paying for research. Third, lawyers are typically very overworked, often struggling to meet deadlines.
None of those things justify submitting false legal authority to a judge or a special master. Ever.
But it’s a problem that the legal profession needs to face and, judging from the number of times that we’ve seen this issue come up since ChatGPT was released in late 2022, it’s not going away anytime soon. The law firm hierarchy has always been to have less expensive attorneys do research and then to leverage that research into written submissions for the court, often drafted by those same lawyers.
Today’s ruling may suggest that there needs to be an interim step to ensure that all research done is legitimate before you sign your name to a brief containing someone else’s research. Will courts start requiring a report filed together with a brief showing that all authorities are legitimately cited? Is that a solution to this issue? Do we even have technology right now that provides that? Will citator tools like Shepard’s work for that, perhaps? And how do clients get compensated for the damage to their reputation before a court when this happens? The Court here specifically noted that Mrs. Lacey had no responsibility for this, but we all know that human nature is to take those kinds of things into account in subsequent rulings.
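On the technology question, automated citation extraction already exists in open source, and you can imagine a pre-filing check built on top of it. Here’s a minimal sketch, assuming the open-source eyecite library from the Free Law Project to extract citations from a brief’s text; the verify_citation stub and the sample citations are hypothetical placeholders for whatever trusted case law database or citator service a firm actually uses, not a real verification service.

```python
# A minimal sketch of a pre-filing cite check, assuming the open-source
# eyecite library (pip install eyecite) from the Free Law Project.
from eyecite import get_citations


def extract_citations(brief_text: str) -> list[str]:
    """Pull every citation eyecite can recognize out of the brief's text."""
    return [cite.matched_text() for cite in get_citations(brief_text)]


def verify_citation(cite: str) -> bool:
    """Hypothetical stub: look the cite up in a trusted case law database
    or citator service and return whether it resolves to a real decision."""
    raise NotImplementedError("Wire this up to your research platform.")


if __name__ == "__main__":
    # Made-up sample citations, for illustration only.
    sample = (
        "See Smith v. Jones, 123 F.3d 456, 460 (9th Cir. 1997); "
        "cf. Doe v. Roe, 456 U.S. 789 (1982)."
    )
    for cite in extract_citations(sample):
        # Each extracted cite would be passed to verify_citation() before
        # the brief goes out the door; here we just list what was found.
        print(cite)
```

Even a simple extract-and-verify pass like that would likely have flagged the two non-existent decisions here before the brief ever reached the Special Master.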
We’ve talked about it many times before on Case of the Week: the positions and statements that you make to a court reflect on your client, and they impact how the court views your credibility as counsel in future decisions. The questions I’ve raised here are all key questions that we do not yet have answers to. But one thing is very, very clear. Law firms have an obligation to ensure that their attorneys know and understand the potential implications of using AI: when and how to use it, and all the ways in which it can go horribly wrong. Trust but verify.
This issue is only going to get bigger. Get your arms around it before it does.
Conclusion
That’s our Case of the Week for this week. We’ll be moving to doing our Case of the Week every other week to make room for other content on our newly branded Meet and Confer podcast, so be sure to tune in for our next episode, whether you’re watching us via our blog, YouTube, or downloading it as a podcast on your favorite podcast platform. You can also find back issues of Case of the Week on your favorite podcast platform and be sure to subscribe, as we’ll be adding new content apart from the Case of the Week segments.
As always, if you have suggestions for a case to be covered on the Case of the Week, drop me a line. If you’d like to receive the Case of the Week delivered directly to your inbox via our weekly newsletter, you can sign up on our blog. If you’re interested in doing a free trial of our case law and resource database, you can sign up to get started.
Thanks so much. Have a great week!