Picture the scene: the EU’s top competition official, Teresa Ribera, sits across from the CEOs of Google, Meta, OpenAI and Amazon in San Francisco, working through every layer of what she calls the “entire AI stack”: chatbots, training data and the cloud infrastructure that powers both. It is, by any measure, an unusual setting for a competition regulator. The question it raises for the companies that use these platforms is practical and immediate: what does this scrutiny mean for how you govern your AI suppliers?
What Ribera Is Actually Examining
Ribera’s March 2026 trip to the United States was not a goodwill tour. She has already opened several investigations into Google and Meta’s business practices and has made clear that competition enforcement is moving upstream from the products themselves to the infrastructure that produces them. Her focus is on whether dominant platforms are using control over training data, compute capacity and distribution channels to exclude rivals, a form of self-preferencing that the Digital Markets Act (DMA) already prohibits in adjacent contexts.
The scope is wider than previous EU investigations. Ribera told Berlin’s International Conference on Competition that her regulators are examining not just the final AI applications but also the underlying models, the data those models are trained on and the cloud infrastructure at their foundation. The Commission has also flagged concern about acqui-hires, the practice of licensing a startup’s technology and absorbing its team without triggering a formal merger review, as a mechanism for consolidating AI capability without regulatory scrutiny.
Alongside competition enforcement, the DMA review due in May 2026 includes a dedicated questionnaire on AI, signalling that the Commission is actively considering whether to extend gatekeeper obligations to AI services directly. For now, AI systems are not designated as core platform services under the DMA, but that may not remain true.
Why Third-Party AI Vendor Risk Is No Longer Just a Data Protection Question
Most organisations running an AI vendor risk assessment are focused on data protection, the EU AI Act deployer obligations and contractual liability. That is reasonable. What the Ribera meetings illustrate is that third-party AI vendor risk now has a further dimension: competition law.
If the Commission determines that a platform has been self-preferencing its AI services or unlawfully bundling AI capabilities with dominant platform access, the enforcement outcome could force structural changes to how that platform delivers its products. Organisations dependent on bundled AI services (a search API with generative features, a cloud contract that includes model access, a productivity suite with embedded AI) would need to adapt their workflows, renegotiate contracts or find alternative suppliers at short notice. That is an operational disruption, not a compliance notification.
The risk is concentration. An organisation that has built core processes around a single platform’s AI stack has effectively outsourced strategic capability to a supplier whose market position is under active legal challenge. The supplier may be forced to unbundle services, change data-sharing practices or divest specific capabilities as a condition of settlement. Your third-party AI vendor risk assessment should reflect that possibility.
The AI Act Adds a Further Wrinkle
Under Article 25 of the EU AI Act, deployers of high-risk AI systems who make substantial modifications to a system, or who change its intended purpose, can be reclassified as providers and assume full provider obligations. If your vendor restructures its product in response to competition enforcement, for instance by disaggregating a bundled service, the contractual and technical terms governing your access may change in ways that affect your own compliance status. Governance programmes that treat the AI Act and competition law as separate regulatory tracks will miss this kind of downstream consequence.
3 Practical Actions for Governance Teams Managing Third-Party AI Vendor Risk
Add Competition Enforcement to Your Third-Party AI Risk Register
Vendor risk assessments typically cover data protection, cybersecurity and contractual liability. They rarely ask whether a supplier is under active antitrust investigation, or whether its market structure is likely to change under regulatory pressure. Add that question. For any AI supplier on which you have significant operational dependence, note whether it is designated as a gatekeeper under the DMA, whether it is subject to ongoing Commission investigations and whether its AI services are currently bundled with platform access in a way that could change.
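To make this concrete, a risk-register entry for an AI supplier can be sketched as a simple data structure. This is an illustrative sketch only: the field names, vendor name and flagging logic are assumptions for the example, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AIVendorRiskEntry:
    """One risk-register entry per AI supplier. Field names are illustrative."""
    vendor: str
    services_used: list[str]
    dma_gatekeeper: bool                # designated as a gatekeeper under the DMA?
    open_eu_investigations: list[str]   # case references, if any are public
    bundled_with_platform_access: bool  # is the AI service tied to a broader platform contract?
    operational_dependence: str         # "low" | "medium" | "high"

    def competition_flags(self) -> list[str]:
        """Return competition-related flags that warrant deeper review."""
        flags = []
        if self.dma_gatekeeper:
            flags.append("DMA gatekeeper")
        if self.open_eu_investigations:
            flags.append("active EU investigation")
        if self.bundled_with_platform_access and self.operational_dependence == "high":
            flags.append("high-dependence bundled service")
        return flags

# Hypothetical vendor, for illustration only
entry = AIVendorRiskEntry(
    vendor="ExampleCloud",
    services_used=["hosted LLM API", "vector search"],
    dma_gatekeeper=True,
    open_eu_investigations=["hypothetical case reference"],
    bundled_with_platform_access=True,
    operational_dependence="high",
)
print(entry.competition_flags())
```

The point of the structure is that the competition fields sit alongside, not apart from, the data-protection and security fields the register already carries.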
Review Vendor Contracts for Unbundling and Interoperability Scenarios
If a platform is required to unbundle AI services from its broader offering, or to provide interoperability with competing tools, your contract should specify what happens to your service level, pricing and data portability. Many current agreements do not. Review whether your AI vendor contracts contain provisions covering regulatory-driven service restructuring, and whether exit clauses and data portability rights are sufficient to allow you to migrate workloads if necessary. The EU AI Act’s requirements for written agreements between providers and third-party suppliers are a useful reference point for what contractual governance should cover.
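The review above can be run as a gap analysis: list the provisions you expect to find and report which ones a given contract lacks. The clause names and questions below are hypothetical examples of what such a checklist might contain, not terms taken from any real agreement.

```python
# Illustrative contract-review checklist; clause names and wording are
# assumptions for this sketch, not drawn from any specific agreement.
REQUIRED_CLAUSES = {
    "regulatory_restructuring": "What happens to service levels and pricing if the "
                                "vendor must unbundle or restructure its offering?",
    "data_portability": "Export formats, timelines and costs for migrating data out.",
    "exit_rights": "Termination rights triggered by material service changes.",
    "interoperability": "Commitments if the vendor must open interfaces to competitors.",
}

def review_contract(present_clauses: set[str]) -> dict[str, str]:
    """Return the checklist items missing from a contract."""
    return {name: question for name, question in REQUIRED_CLAUSES.items()
            if name not in present_clauses}

# A contract that only covers data portability leaves three gaps
gaps = review_contract({"data_portability"})
for clause, question in gaps.items():
    print(f"MISSING {clause}: {question}")
```

The output is a worklist for renegotiation rather than a pass/fail verdict, which matches how contract remediation actually proceeds.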
Diversify AI Supply Chains Where Concentration Risk Is High
This is not an argument for using every platform on the market. It is an argument for audit. Where your organisation has built critical processes around a single vendor’s AI capabilities, document that dependency explicitly and assess what a forced migration would cost in time, money and operational continuity. For high-risk dependencies, consider whether a secondary supplier relationship or an open-model fallback is proportionate to the exposure.
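One way to make the audit repeatable is a Herfindahl-style concentration index over the share of critical AI workloads each vendor carries. The threshold below is an assumption chosen for illustration, not a regulatory standard.

```python
# Illustrative concentration check over AI workload shares per vendor.
# The 0.5 threshold is an assumed trigger for this sketch, not a standard.
def concentration_index(workload_share: dict[str, float]) -> float:
    """Sum of squared vendor shares; 1.0 means everything rides on one vendor."""
    total = sum(workload_share.values())
    return sum((v / total) ** 2 for v in workload_share.values())

def needs_fallback(workload_share: dict[str, float], threshold: float = 0.5) -> bool:
    """Flag portfolios where a forced migration would hit one supplier hard."""
    return concentration_index(workload_share) > threshold

# Hypothetical portfolio: 8 of 10 critical workloads on a single vendor
shares = {"VendorA": 8, "VendorB": 1, "VendorC": 1}
print(round(concentration_index(shares), 2))
print(needs_fallback(shares))
```

A score near 1.0 documents exactly the single-vendor dependency the text describes, and tracking it over time shows whether diversification efforts are actually moving the number.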
The EU Is Regulating AI Through Multiple Instruments Simultaneously
The AI Act, GDPR, NIS2 and competition law are not independent tracks. They are overlapping instruments that can interact in ways that catch governance programmes off guard. A platform restructuring driven by DMA enforcement can affect your AI Act compliance status. A change to data-sharing practices forced by competition settlement can affect your GDPR position. As a European Parliament study on the interplay between the AI Act and the EU digital legislative framework documents, these instruments create overlapping obligations that governance teams must track together, not in isolation.
Ribera’s meetings in San Francisco were the latest signal that EU competition enforcement is moving into AI governance territory that most procurement and compliance teams have not yet mapped. The organisations that map it first will be better placed when the enforcement outcomes arrive.
If your AI governance programme does not yet cover third-party AI vendor risk in depth, the Future Prep AI Governance Prep Track includes structured frameworks for supplier due diligence, procurement governance and risk register design. Start there.