US Court Rules: AI Conversations Are Not Protected by Attorney-Client Privilege
Background
In United States v. Heppner, Bradley Heppner, a financial services executive and former chairman of GWG Holdings Inc., was facing criminal charges of conspiracy to commit securities fraud and wire fraud, making false statements to auditors, and falsifying corporate records. The indictment charged that Heppner defrauded GWG's investors out of more than $150 million.
After receiving a grand jury subpoena and engaging defence counsel, Heppner used the consumer version of Claude, without direction from his lawyers, to generate 31 documents outlining his potential defence strategy and legal exposure, drawing on confidential information his counsel had shared with him. He then transmitted those documents to his lawyers.
When Heppner was subsequently arrested and FBI agents executed a search warrant on his home, they seized the electronic devices containing those documents. His lawyers claimed attorney-client privilege over them. The government moved to compel production.
Decision
Judge Jed Rakoff of the Southern District of New York disagreed with Heppner. He noted that:
"It is well established that the attorney-client privilege attaches to, and protects from disclosure, 'communications (1) between a client and his or her attorney (2) that are intended to be, and in fact were, kept confidential (3) for the purpose of obtaining or providing legal advice.' United States v. Mejia, 655 F.3d 126, 132." United States v. Heppner, 25 Cr. 503 (JSR), p. 4.
Judge Rakoff rejected the privilege and work product claims on three separate grounds:
Ground 1 — Claude is not an attorney.
The AI-generated documents were not communications between Heppner and his attorney, and Heppner did not assert that he believed Claude could act as his attorney. All recognised privileges require a relationship of trust with a licensed legal professional who owes fiduciary duties, and an AI tool cannot form that relationship.
Ground 2 — No confidentiality.
Anthropic's terms of service explicitly reserve the right to disclose user data to third parties, including government and regulatory authorities, and to use inputs to train Claude. By agreeing to those terms, Heppner had no reasonable expectation that his conversations were confidential.
Ground 3 — Not prepared at the direction of counsel.
The work product doctrine protects materials created by, or under the supervision of, a lawyer in anticipation of litigation. Heppner created these documents on his own initiative, without instruction from his lawyers. Furthermore, Claude disclaims providing legal advice: when asked, Claude responded, "I'm not a lawyer and can't provide formal legal advice or recommendations" and recommended that the user "should consult with a qualified attorney who can properly assess your specific circumstances."
Judge Rakoff noted the outcome might have been different had Heppner's lawyers directed him to use Claude, in which case the tool might arguably have functioned in a manner similar to a highly trained professional working under legal supervision.
On retroactivity, the fact that Heppner later transmitted the documents to his lawyers could not cure the problem. Privilege must exist at the time of the communication; it cannot be applied after the fact.
What it Means
If you use a consumer AI tool to think through a legal or compliance issue, even in preparation for a conversation with your lawyers, the resulting documents may not be protected if litigation follows.
For organisations: any AI use in legally sensitive workflows (compliance reviews, HR investigations, contract analysis) should be done through enterprise-grade platforms, under the explicit direction of counsel, and with appropriate confidentiality safeguards.
This ruling is fact-specific, and the law in this area is still evolving. But using a consumer AI tool for anything legally sensitive is now a demonstrated risk, not a theoretical one.
References:
- "Your AI Conversations Are Not Privileged: What a New SDNY Ruling Means for Every Lawyer and Client", The National Law Review
- "S.D.N.Y. First-of-its-Kind Ruling: AI-Generated Documents Are Not Privileged", O'Melveny Worldwide
- United States v. Heppner, 25 Cr. 503 (JSR), Memorandum
AI Risk & Safe Use Training for Teams
To support organisations facing this exact challenge, I offer a 1‑hour live training session designed to give teams the clarity, confidence, and boundaries they need to use AI safely.
Teams will walk away with:
- What is (and isn’t) safe to put into tools like ChatGPT, Gemini, or Claude
- How to avoid common AI mistakes that lead to reputational risk or data breaches
- How use of “free” or unapproved tools could expose the organisation
- What settings to check, what tools to trust, and what safeguards leaders need
- Real examples of AI failures across legal, government, and education sectors
It’s a plain‑language, sector‑neutral session for teams across admin, marketing, operations, sales, and more.
More information is available on the website: AI Risk & Safe Use Training for Teams
If you are ready to strengthen your AI governance, reduce compliance risks, and accelerate safe adoption, let’s talk.
📩 Email me directly at contact@senaconsulting.com.au
📅 Or book a free 20-minute discovery call here
About the Author
Hi 👋 I’m Rikki Archibald, an AI Risk and Compliance Consultant and Founder of Sena Consulting.
I help organisations put the right frameworks, staff training, and internal policies in place so they can use AI safely and responsibly. With strong governance at the core, AI adoption becomes faster, smarter, and more sustainable, enabling you to innovate quickly, scale with confidence, and stay ahead of the curve.
Through this newsletter, I share AI regulatory updates, global headlines, and case summaries, with clear takeaways for organisations that want to move fast with AI, without the unnecessary risk.
Subscribe to The AI Briefing for practical AI risk and compliance insights:
