Your AI governance framework accounts for the EU AI Act. It almost certainly does not account for the Digital Services Act. The European Commission is now assessing whether ChatGPT qualifies as a “very large online search engine” under the DSA. If the Commission proceeds with a formal DSA platform designation, the consequences reach beyond OpenAI. Every organisation that has embedded ChatGPT into its operations faces a changed risk profile: not because the use case changed, but because the vendor’s regulatory status did.
What the Commission Is Assessing
On 10 April 2026, Handelsblatt reported that ChatGPT is set to be classified under the DSA’s strictest tier. Commission spokesman Thomas Regnier confirmed that OpenAI had published user numbers above the 45 million monthly active user threshold that triggers designation. The reported figure is approximately 120.4 million average monthly EU users for ChatGPT’s search feature alone, nearly three times the threshold.
The legal question is not straightforward. The DSA was built around three categories of intermediary service: mere conduit, caching and hosting. A large language model does not sit neatly in any of them. The Commission is examining whether ChatGPT’s integrated search functionality brings it within scope as a Very Large Online Search Engine (VLOSE). The assessment hinges on whether a service that synthesises answers rather than returning indexed links qualifies under Article 3(j) of the DSA.
A precedent is already forming. In January 2026, the Commission designated Meta’s WhatsApp as a Very Large Online Platform but applied the obligations only to its public Channels feature, not to private messaging. A similar partial-scope approach could apply to ChatGPT: the search functionality designated, the broader chatbot excluded. Either way, a DSA platform designation for ChatGPT would be the first to cover an AI chatbot.
What a DSA Platform Designation Triggers
Services designated as Very Large Online Platforms or Very Large Online Search Engines face obligations that go well beyond standard platform rules. The additional requirements include annual systemic risk assessments covering threats to fundamental rights, electoral processes and public health. Providers must implement proportionate mitigation measures, submit to independent audits at least once per year, publish transparency reports on content moderation and algorithmic systems, offer at least one recommender option not based on user profiling, and maintain a publicly accessible advertising repository.
Systemic Risk and Audit Obligations
The systemic risk assessment obligation under Article 34 of the DSA requires designated providers to identify and analyse risks stemming from the design, functioning and use of their services. This includes risks from algorithmic systems and from intentional manipulation. Providers must then have both the assessment and the mitigation measures independently audited. The first cycle of these audits across existing designated platforms has already exposed significant gaps in how providers interpret and report on systemic risks.
What This Means for ChatGPT
For a service like ChatGPT, DSA platform designation could mean restructuring how search results are generated, how content moderation applies to AI-produced answers and how user complaint mechanisms operate. OpenAI would also need to share data with the Commission and with vetted researchers, and pay an annual supervisory fee.
Why This Matters for Deployers
Most organisations treat their AI tools as procurement decisions. The AI Act reinforces this by assigning obligations based on the risk classification of the AI system itself. The DSA operates differently. It regulates the platform, and the platform’s regulatory status can change after you have already built your operations around it.
If ChatGPT receives formal DSA platform designation, OpenAI will need to restructure elements of how the service operates in the EU. That restructuring may affect API access terms, data processing arrangements, feature availability and the transparency obligations that flow through to business users. Your contracts probably do not account for this. Most standard SaaS agreements lack clauses addressing what happens when a vendor’s regulatory classification changes mid-term.
Platform Dependency as Governance Risk
This is the governance gap that few frameworks have addressed. Your organisation’s compliance posture is partly determined by decisions your vendor’s regulator makes. If your AI platform provider must conduct systemic risk assessments, those assessments may surface risks in your use case that the provider then restricts or modifies. You do not control the timeline, the scope or the outcome. Academic analysis of ChatGPT’s hybrid status under the DSA reaches the same conclusion: the overlap between the DSA and the AI Act creates new compliance questions for organisations that depend on these services.
Three Steps to Take Now
First, map your AI platform dependencies. Identify which services your organisation relies on, whether they operate in the EU and whether they are approaching or have exceeded the 45 million user threshold. ChatGPT is the current case, but Gemini, Copilot and other large-scale AI services may follow. A minimal sketch of such a dependency register appears after these three steps.
Second, review your contracts for regulatory-change clauses. Check whether your agreement with the platform provider addresses changes in the provider’s regulatory obligations, and whether those changes give either party the right to modify terms, restrict features or terminate access.
Third, monitor the DSA platform designation timeline. The Commission’s assessment is ongoing. A formal designation decision could arrive within months. Once issued, the provider has four months to comply with the additional obligations. That is a short window for dependent organisations to adjust their own governance arrangements.
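To make the first two steps concrete, here is a minimal sketch of what a platform dependency register might look like, written in Python. Treat it as an illustration, not an implementation: the field names and the ExampleVendor entry are assumptions made for this sketch, and the ChatGPT user figure is simply the number reported above. Only the 45 million threshold comes from the DSA itself.

```python
from dataclasses import dataclass
from typing import List, Optional

# The DSA's designation trigger: 45 million average monthly active EU users.
DSA_USER_THRESHOLD = 45_000_000

@dataclass
class PlatformDependency:
    vendor: str
    service: str
    eu_monthly_users: Optional[int]     # None where the vendor publishes no EU figure
    has_regulatory_change_clause: bool  # does the contract address reclassification?

def designation_exposure(register: List[PlatformDependency]) -> List[PlatformDependency]:
    """Flag dependencies at or above the DSA designation threshold."""
    return [
        dep for dep in register
        if dep.eu_monthly_users is not None
        and dep.eu_monthly_users >= DSA_USER_THRESHOLD
    ]

# Illustrative entries only; verify figures against each provider's own
# DSA user-number disclosures before acting on them.
register = [
    PlatformDependency("OpenAI", "ChatGPT search", 120_400_000, False),
    PlatformDependency("ExampleVendor", "Internal copilot", None, True),
]

for dep in designation_exposure(register):
    clause = "yes" if dep.has_regulatory_change_clause else "NO"
    print(f"{dep.vendor} / {dep.service}: above DSA threshold; "
          f"regulatory-change clause in contract: {clause}")
```

A spreadsheet serves the same purpose. What matters is tracking user-scale exposure and contractual coverage in one place, so a designation decision does not catch you mapping dependencies from scratch.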
The Regulation You Did Not Plan For
The EU AI Act is not the only regulation shaping how your organisation uses AI. The DSA adds a second layer: one that targets the platform rather than the AI system, and one that most internal governance frameworks have not yet absorbed. If your AI governance programme stops at the AI Act, it has a blind spot. Start closing it by mapping your platform exposure and reviewing your vendor agreements before the Commission’s decision forces the issue.