When AI Chats Become Evidence: What the GWG Holdings Ruling Means for Your Acceptable Use Policy
In February 2026, US District Judge Jed Rakoff ordered Bradley Heppner, the former chairman of GWG Holdings, to hand over 31 documents he had generated using a consumer AI chatbot. Heppner had fed case-related material into Anthropic’s Claude to prepare reports for his defence lawyers. The court ruled that no attorney-client privilege could exist between a user and an AI platform. Prosecutors could discover every prompt and every output. This is the first US federal ruling on the question, and its logic extends well beyond courtrooms. Any EU organisation whose AI acceptable use policy does not address the confidentiality status of chatbot interactions should treat this as a direct warning. What your staff type into a public AI tool is not private, and a court may one day read it in the open.
Why This Ruling Matters for EU Organisations
The Heppner decision is US case law, but its consequences are not confined to US soil. EU organisations with US litigation exposure, US-based counterparties, or staff using US-hosted AI tools inherit the risk directly. If an employee enters commercially sensitive or legally protected information into a consumer chatbot, that content may become discoverable in US proceedings regardless of where the employee sits.
The GDPR adds a second layer of exposure. Under Article 30, organisations must maintain records of processing activities. If AI chatbot logs qualify as business records (and under the Heppner ruling they do), those logs fall within the scope of processing documentation obligations. Purpose limitation under Article 5 raises a further problem: an employee may enter data into a chatbot for one purpose, but the provider may retain, review or disclose that data for entirely different purposes. The EDPB’s 2024 opinion on AI models and personal data already required controllers to assess whether AI platforms process data lawfully. The Heppner ruling now shows what happens when that assessment is missing entirely.
There is a cross-jurisdictional wrinkle, too. If someone enters privileged legal strategy into a consumer AI platform and a US proceeding later forces disclosure, the privilege waiver may follow the content into EU-linked disputes. Organisations operating across both jurisdictions cannot assume that a waiver in one stays contained.
Three Failures Every AI Acceptable Use Policy Shares
The ruling did not expose an exotic risk. It exposed a gap that exists in most organisations’ AI acceptable use policies today. Three structural weaknesses are worth examining.
Prohibited Content Without Channel Classification
Most acceptable use policies list what employees may not enter into AI tools: personal data, proprietary code, customer records. Few address whether the channel itself is confidential. The Heppner court was unambiguous on this point. Consumer AI platforms are not confidential channels. Their terms of service permit data review, retention and disclosure to authorities. An AI acceptable use policy that restricts content types but ignores the confidentiality characteristics of the platform itself misses the central issue.
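To make channel classification concrete, here is a minimal policy-as-code sketch in Python. The channel categories, topic labels and `is_permitted` helper are illustrative assumptions for this article, not features of any real policy engine or AI platform.

```python
from enum import Enum

class ChannelClass(Enum):
    """Confidentiality status of the channel itself, independent of content."""
    CONSUMER = "consumer"        # public ToS permit review, retention, disclosure
    ENTERPRISE = "enterprise"    # contractual confidentiality terms, if documented
    SELF_HOSTED = "self_hosted"  # data stays on organisational infrastructure

# Subject categories the policy bars from non-confidential channels
# (illustrative labels; align these with your own policy's scope note).
SENSITIVE_TOPICS = {"legal", "board_strategy", "mna", "disputes", "regulatory"}

def is_permitted(channel: ChannelClass, topic: str) -> bool:
    """Permit sensitive topics only on channels assessed as confidential."""
    if topic in SENSITIVE_TOPICS:
        return channel in (ChannelClass.ENTERPRISE, ChannelClass.SELF_HOSTED)
    return True

# A consumer chatbot fails the check for legal content, whatever the prompt says.
assert not is_permitted(ChannelClass.CONSUMER, "legal")
```

The point of the sketch is the shape of the rule: the decision turns on the channel’s confidentiality status, not on the content type alone.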
Training Focused on Prompts, Not on Records
Organisations that train staff on AI tend to focus on prompt quality: how to write better instructions, how to get more useful outputs. Almost none address the recordkeeping status of those prompts and outputs. Under the Heppner ruling, opposing counsel can compel production of every prompt and every output in discovery. If your training programme does not make this explicit, employees will continue to treat AI chats as ephemeral conversations. They are not. They are written records sitting on a third party’s infrastructure.
No Boundary Between Productivity and Strategy
The third failure is the absence of a line between using AI as a productivity tool and using it as a thinking partner for legal, financial or strategic questions. Heppner used Claude to synthesise defence strategy; most organisations’ AI policies do not distinguish between drafting an email summary and testing a negotiation position or preparing for regulatory scrutiny. Those activities carry entirely different risk profiles, and the AI acceptable use policy should reflect that difference explicitly.
Six Updates to Make to Your AI Acceptable Use Policy
If your organisation’s AI acceptable use policy was last reviewed before the Heppner ruling, it needs updating. Six additions are worth making now.
First, a scope note on confidential topics. Employees need a clear list of subject categories they must not enter into any consumer AI tool: legal matters, board-level strategy, M&A activity, ongoing disputes and regulatory correspondence.
Second, explicit channel guidance. The policy should name which AI tools are approved for sensitive work and confirm that consumer-grade platforms are not among them. Enterprise AI tools with contractual confidentiality protections may carry a different risk profile, but the organisation must document and assess that distinction rather than assume it.
Third, a retention and logging rule for AI interactions. The policy should state whether the organisation retains AI interactions internally, where it stores them, for how long and under what legal basis. A minimal sketch of what such a record might capture appears after this list.
Fourth, a privilege awareness note for legal and compliance staff. Anyone working near privileged material needs to understand that inputting it into an AI tool may waive that privilege permanently.
Fifth, a third-party risk clause covering the AI provider’s own terms. If the provider’s terms of service allow data sharing, training on user inputs, or disclosure to authorities, that needs to appear in the policy as a documented risk. The EDPS guidance on generative AI already recommends that organisations assess provider terms before deployment; the Heppner ruling shows why.
Sixth, a training refresh trigger. Whenever a court ruling or regulatory opinion changes the risk landscape for AI use, the governance lead should update the policy and its associated training within a defined period rather than wait for the next annual review cycle.
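As a sketch of the third update, the record below shows what an internally retained AI interaction log might capture, again in illustrative Python. Every field name and the placeholder retention period are assumptions to be replaced by entries from your own Article 30 register and retention schedule.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIInteractionRecord:
    """One retained AI interaction, treated as a business record.

    Illustrative fields only; map them to your records-of-processing
    register (GDPR Article 30) and retention schedule before use.
    """
    user_id: str               # pseudonymised internal identifier
    tool: str                  # the approved platform used for the interaction
    channel_class: str         # "consumer" | "enterprise" | "self_hosted"
    prompt: str                # full input text: a discoverable written record
    output: str                # full model output: equally discoverable
    legal_basis: str           # documented GDPR basis for retaining this log
    retention_days: int = 365  # placeholder period; set per retention schedule
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

Storing the prompt and output verbatim is deliberate: the Heppner ruling treats both as records, so the internal log should mirror what could be compelled.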
What to Do This Week
Governance leads can take three actions immediately, in ascending order of effort. Pull your current AI acceptable use policy and check whether it classifies chatbot platforms by confidentiality status. If it does not, draft a one-page interim directive restricting consumer AI use for legally sensitive, board-level or contractual content. Then schedule a briefing with your legal team to review whether existing AI vendor governance controls account for the discoverability of AI-generated records under both US and EU law.
The Heppner ruling is the latest signal that AI governance, procurement and legal risk are the same conversation. If your organisation’s cloud and AI vendor dependencies are not yet mapped, start there; the relationship between where your data sits and who can compel access to it is the thread that connects this ruling to the broader question of digital sovereignty.