What happens when your AI vendor’s governance commitments conflict with your operational needs? The Pentagon found out in February 2026, and the fallout has not stopped. For EU organisations that depend on the same providers, this is not a US defence story. It is an AI vendor governance case study with direct implications for every deployer managing supply-chain risk.
What Happened
Anthropic held a $200 million contract with the US Department of Defense, and Claude was the first commercial AI model deployed on classified military networks. In January 2026, Defense Secretary Pete Hegseth issued a directive requiring all Pentagon AI contracts to include “any lawful use” language within 180 days. Anthropic refused to comply on two points: it would not permit Claude to be used for mass domestic surveillance or for fully autonomous weapons systems.
On 27 February, Anthropic’s CEO Dario Amodei confirmed the company would not concede. The Pentagon terminated the contract and designated Anthropic a “supply chain risk”, a label normally reserved for foreign adversaries that bars any defence contractor from using the company’s products. A federal judge temporarily blocked the blacklisting in late March, but the Pentagon has already moved on. Smaller AI firms are fielding calls from generals and combatant commanders, and the Department is actively integrating alternative models into classified environments.
The vendor relationship collapsed not because the product failed, but because the vendor’s governance posture became incompatible with the customer’s requirements.
The AI Vendor Governance Lesson
Most organisations treat AI procurement as a one-off evaluation. You assess the product, check the vendor’s policies, sign the contract and move on. The Anthropic case exposes why that approach is structurally insufficient. AI vendor governance is not a procurement exercise; it is an ongoing risk discipline that requires monitoring, contractual safeguards and contingency planning.
A provider’s responsible-AI commitments can change at any point. Anthropic itself revised its Responsible Scaling Policy on the same day the Pentagon issued its ultimatum, replacing fixed safety guardrails with a more flexible framework. The company said the revision was unrelated to the Pentagon dispute. Regardless of cause, the effect is the same: a governance posture your organisation assessed at onboarding may no longer exist six months later.
Three Structural Risks to Assess
AI vendor governance risk does not arrive as a single event. It accumulates across three dimensions, each of which can develop independently and each of which most procurement frameworks fail to monitor after the contract is signed.
Policy Alignment Drift
Your AI vendor’s responsible-AI policy was part of your due diligence at procurement. But that policy is not static. Commercial pressure, political environments and customer demands can all force a provider to tighten, loosen or restructure its commitments. If the vendor’s AI governance posture shifts after you have embedded the product in your workflows, your own compliance framework is affected. For EU organisations operating under the AI Act, a vendor that weakens its safety commitments may introduce risks that your governance framework assumed were managed upstream.
Concentration Risk
The Pentagon’s dependency on a single AI provider became a strategic vulnerability the moment that relationship broke down. The same logic applies to any organisation that relies on one provider for its core AI capability. If your primary AI vendor exits your market, changes its terms of service or becomes subject to regulatory restrictions that limit functionality, your operations are exposed. AI vendor governance must include documented exit plans and assessed alternatives, not as a theoretical exercise but as a maintained operational capability.
Jurisdictional Conflict
This is where the Anthropic case carries particular weight for EU deployers. The Pentagon is now requiring “any lawful use” terms from its AI vendors. The EU AI Act imposes specific obligations on providers and deployers, including restrictions on certain uses. Can a vendor credibly serve US government customers under “any lawful use” conditions while simultaneously meeting EU AI Act obligations that prohibit or restrict specific applications? If the answer is no, deployers face a provider that may eventually have to choose between markets, or one whose compliance posture becomes internally contradictory.
Practical AI Vendor Governance Controls
Start with structured due diligence at onboarding. Assess not only the vendor’s current policies but the governance structures that determine how those policies can change. Ask whether the vendor’s responsible-AI commitments are contractual obligations or voluntary positions, and whether you will be notified if they change.
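To make that assessment repeatable rather than a one-off document, it helps to capture it as a structured record. The sketch below is illustrative only: the field names and checks are assumptions about what such a record might hold, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VendorGovernanceRecord:
    """Snapshot of a vendor's governance posture at onboarding.

    Illustrative sketch: field names are assumptions, not a standard schema.
    """
    vendor: str
    policy_name: str                  # e.g. the vendor's responsible-AI policy
    policy_version: str               # version or publication date you assessed
    commitments_contractual: bool     # contractual obligation or voluntary position?
    change_notification_agreed: bool  # will the vendor notify you of revisions?
    assessed_on: date = field(default_factory=date.today)

    def open_questions(self) -> list[str]:
        """Flag the gaps to close before signing."""
        gaps = []
        if not self.commitments_contractual:
            gaps.append("Responsible-AI commitments are voluntary, not contractual.")
        if not self.change_notification_agreed:
            gaps.append("No agreed notification if governance policies change.")
        return gaps
```

The point of the record is the two boolean fields: if either answer is “no” at onboarding, you are relying on goodwill rather than governance.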
Build regulatory-change clauses into your contracts. Your agreement should address what happens if the vendor’s regulatory classification changes, if its governance policies are revised or if a government customer imposes requirements that conflict with your own compliance framework. Without these clauses, you have no contractual basis to act when the vendor’s position shifts.
Conduct regular vendor governance reviews. AI vendor governance is not a one-time assessment. Schedule periodic reviews of your provider’s published policies, regulatory status and public statements. Treat a material change in any of these as a trigger for re-evaluation.
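One cheap way to operationalise the trigger, assuming the vendor publishes its policy at a stable URL (the URL below is hypothetical): hash the published policy text on a schedule and treat any change in the hash as a prompt for human review.

```python
import hashlib
import urllib.request

# Hypothetical URL: substitute your vendor's published policy page.
POLICY_URL = "https://vendor.example.com/responsible-ai-policy"

def policy_fingerprint(url: str) -> str:
    """Fetch the published policy and return a content hash."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def policy_changed(url: str, last_known_hash: str) -> bool:
    """True if the published policy differs from the version last assessed."""
    return policy_fingerprint(url) != last_known_hash
```

A changed hash proves nothing about substance, and pages change for cosmetic reasons, so treat this as a tripwire that schedules a re-evaluation, not as a compliance verdict.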
Maintain documented exit plans. Know which alternative providers can serve your use case, how long migration would take and what data portability provisions your current contract includes. The Pentagon learned this lesson when it found itself dependent on a single vendor with no immediate replacement ready.
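To keep an exit plan maintained rather than theoretical, attach a review date to each documented alternative and flag plans that have gone stale. A minimal sketch, with illustrative field names and an assumed six-month review window:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ExitPlan:
    alternative_provider: str
    est_migration_days: int   # your own estimate, revisited at each review
    data_portable: bool       # does the current contract let you take your data?
    last_validated: date      # when the plan was last tested or reviewed

def stale_plans(plans: list[ExitPlan], max_age_days: int = 180) -> list[ExitPlan]:
    """Return exit plans not validated within the review window."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [p for p in plans if p.last_validated < cutoff]
```

A plan that has not been validated in six months is, in practice, the Pentagon’s position: a dependency with no replacement ready.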
The Risk You Are Not Monitoring
The Anthropic story is not about defence procurement. It is about what happens when responsible AI governance becomes a competitive liability for a vendor, and when a government customer’s demands force a provider to choose between its principles and its contracts. Every deployer faces a version of this risk. Your AI vendor’s governance posture can change, and when it does, your risk profile changes with it. If your governance framework does not include ongoing vendor monitoring, contractual safeguards and a tested exit plan, you are carrying a risk you have not sized.