
Do You Waive Privilege by Using AI? Two Federal Courts Say It Depends

If you or your clients are using generative AI tools to analyze facts or draft litigation strategy, two new federal rulings show that privilege protection may turn on facts most litigators are not yet asking about.

By Kelly Twigger


In February 2026, two federal courts issued the first rulings addressing whether materials created using publicly available generative AI tools are protected by the attorney-client privilege or the work product doctrine. Both courts applied existing law. Both courts were correct under that law. And yet they reached different outcomes.

The lesson is not about AI hype. It is about how privilege doctrine operates when the drafting environment has fundamentally changed.

What the Courts Did

Let’s start with what actually happened.

On February 10, 2026, two federal judges confronted the same underlying issue: when someone uses a publicly available generative AI tool to analyze facts or draft content related to a case, is the product discoverable and does that use affect privilege? The facts at issue were very different.

Warner v. Gilbarco

In Warner v. Gilbarco, a pro se plaintiff used ChatGPT to assist with her internal thinking and drafting. After discovery closed, the defendant moved to compel production of her AI-related materials, arguing that any privilege was waived because she disclosed information to a third party — ChatGPT. United States Magistrate Judge Patti rejected that argument, characterizing the request as speculative and intrusive. He treated the AI interactions as internal mental impressions from the pro se litigant that were not discoverable. And even assuming they were otherwise discoverable, he held they were protected as work product.

But the sentence that will be quoted — and should be — is this: generative AI programs are “tools, not persons.” That framing is critical to the analysis of both privilege and waiver. If AI is a tool, then using it does not automatically create waiver any more than drafting in Word, emailing yourself notes, or using autofill does. Magistrate Judge Patti went further, warning that adopting the defendant’s waiver theory would effectively nullify work product protection in modern drafting environments.

At the same time, the opinion is narrow. Discovery had closed, the Court did not analyze the platform’s privacy policy as to confidentiality, and it remains an open question whether the work product doctrine applied only because the plaintiff was pro se. The ruling leaves room for factual distinctions.

U.S. v. Heppner

Now contrast that with United States v. Heppner.

There, a criminal defendant used Claude to prepare reports outlining anticipated defense strategy after he knew he was under investigation. Those materials were seized when the FBI executed a search. The government sought a ruling that they were not privileged, and United States District Judge Rakoff agreed. He found that the defendant used Claude on his own initiative rather than at the direction of counsel, and that the platform’s privacy policy expressly stated that user content could be retained and disclosed. The defendant therefore lacked a reasonable expectation of confidentiality.

The Court also made clear that sharing those materials with counsel later does not transform non-privileged communications into privileged ones. That is black letter law.

In his bench ruling on February 10th, District Judge Rakoff noted that the AI should be treated like a “person” for privilege and waiver purposes. The written opinion ultimately grounded the analysis not in metaphysics, but in confidentiality and agency. Had counsel directed the use, the court suggested, the analysis might have been different. Judge Rakoff’s ruling leaves us with the notion that counsel must request or oversee the AI use to retain work product protection.

Why the Analysis Matters

These cases are not really about ChatGPT or Claude; they are about how privilege doctrine functions in a world where generative AI is embedded in daily drafting. United States District Judge Xavier Rodriguez made the point at the University of Florida e-Discovery Conference that these decisions are being viewed through the lens of the law as it exists today. Under that lens, both rulings make sense.

  • Privilege requires confidentiality.
  • Work product protects mental impressions prepared in anticipation of litigation.
  • Waiver requires disclosure in a way that undermines confidentiality.

The friction comes from applying those principles to AI. If generative AI is treated as a third party for all purposes, then nearly every modern drafting environment is vulnerable, as Magistrate Judge Patti noted. Word and Google Docs now incorporate AI. Enterprise copilots sit inside collaboration platforms. If interaction with AI equals disclosure to a third party, privilege doctrine destabilizes quickly.

Magistrate Judge Patti implicitly recognized that problem; District Judge Rakoff focused instead on the express confidentiality terms and the absence of counsel direction. But neither opinion answers the harder questions:

  • Is AI-generated content “ESI” under Rule 26? Recent decisions in the OpenAI matter suggest yes, but the claims there directly implicated the content in the copyright action.
  • Does it matter whether we are talking about prompts, outputs, or integrated drafting assistance? What about the context in which the inquiries are made to AI?
  • Are publicly available AI tools materially different from enterprise-integrated tools? How does confidentiality affect the analysis, and what level of confidentiality do enterprise tools really provide?
  • What preservation obligations attach for both the user and the company that owns the AI tool?
  • Does the criminal context change the analysis? If so, why, and how do we analogize that to civil cases?
  • At what point, if ever, does AI function as an agent of counsel?

Those questions are not theoretical. They are operational, and we do not have answers to guide how we advise clients or make arguments in litigation. We need action in the form of rules and guidance from the courts. The challenge is that courts can only rule on the facts they are presented with and the issues those facts raise. That is exactly what we saw in Warner and Heppner, and we are still left with many open questions.

Why This Is a Discovery Strategy Issue

Discovery is where privilege arguments become case-shaping events. If AI use is not part of your custodian interviews, your privilege analysis is incomplete. If you are not reviewing platform privacy policies, you are making assumptions about confidentiality that a court may not share. If you cannot articulate whether AI use occurred at counsel’s direction, you are leaving an avoidable factual gap. The exposure in Heppner was not abstract. The materials were seized. In Warner, the opposing party tried to compel production.

Now consider your own cases and ask yourself:

  • How many custodians are using generative AI tools today?
  • How many are using them without telling you? What are they using them for?
  • How many enterprise platforms have AI integrated on the back end?

This is not a future problem that requires forming a committee to look into it; it is a now problem that is happening while you read this. And it’s scary. But slow down and think about it a bit, see what fits what your cases are about, then think about how to advise your clients. Hearkening all the way back to Zubulake, courts do not require perfection. But they do require reasonable efforts.
Here are a few steps you can take.

Practice Guidance

Start with fundamentals.

1. Update your custodian interviews — Ask directly about generative AI use. What tool? For what purpose? Under whose direction? And, where applicable, who in the organization should you talk to about retention?

2. Examine confidentiality terms — Read the privacy policy of any publicly available platform implicated. Courts will.

3. Clarify agency — If AI is used to assist with legal analysis, document whether it is done at counsel’s instruction, when, and for what purpose.

4. Distinguish environments — Public-facing AI tools are not necessarily treated the same as enterprise-secured systems, but we don’t have clarity on that yet. Understand the security posture and policies of each.

5. Address preservation — Determine whether AI interactions are retained and whether they fall within the scope of preservation obligations.

6. Educate clients — They need to understand that AI use may have privilege implications depending on how and where it occurs. Ensure they understand the current landscape and that managers are thinking in advance about how AI is used and under what authority.

Conclusion

Generative AI presents a new drafting environment. Courts are applying long-standing privilege doctrine to it without modification. So far, the outcomes turn on facts: confidentiality, direction of counsel, and context. The doctrine has not changed. The environment has. Litigators who recognize that distinction will navigate these issues deliberately. Those who do not will be reacting to rulings instead of anticipating them.

If this helped, share it so we can spread the message to everyone who needs to hear it. 
