Delhi AI Summit: Global AI Governance Divide Deepens

Delegates reviewing documents on global AI governance at the Delhi AI Summit in New Delhi.

At the AI Impact Summit 2026 in New Delhi, the debate over global AI governance moved from policy nuance to open disagreement. During the 18 to 19 February meeting, White House adviser Michael Kratsios stated that the United States “totally” rejects global governance of artificial intelligence. In contrast, the European Union endorsed the summit’s Leaders’ Declaration, reinforcing its commitment to multilateral AI rules.

The summit, hosted by the Government of India, concluded with the AI Impact Summit Declaration, also referred to as the Delhi Declaration. The document calls for international cooperation on trustworthy, inclusive and energy-efficient AI systems.

This visible split signals a structural divergence in how leading economies approach global AI governance.

The AI Impact Summit and the Delhi Declaration

The AI Impact Summit 2026 in New Delhi gathered governments, regulators and industry representatives to discuss AI’s societal and economic effects. The outcome document outlines seven pillars, described as “chakras,” covering trustworthiness, access to compute resources, capacity building and sustainability.

Seven pillars of cooperation

The Delhi Declaration promotes cooperation on:

  • Trustworthy and responsible AI
  • Democratizing access to AI infrastructure
  • Energy-efficient and sustainable AI
  • Talent development and research collaboration
  • Inclusive access for developing economies
  • Risk mitigation and safety
  • Public interest innovation

While the text does not create binding obligations, it frames AI as a shared global responsibility. In doing so, it reinforces the concept of global AI governance through coordinated standards and shared principles.

A non-binding but strategic signal

Declarations of this kind are political instruments. They shape expectations, encourage regulatory convergence and influence future negotiations in forums such as the G20 or the United Nations. Even without enforcement mechanisms, they help define what responsible AI development should look like across jurisdictions.

The United States rejects global AI governance

In New Delhi, Michael Kratsios made clear that the United States opposes centralized global AI governance structures. The U.S. position reflects a preference for national oversight, sector-specific regulation and market-driven innovation.

Washington’s argument is that global governance risks creating bureaucratic layers that slow technological progress. U.S. policymakers often stress flexibility, voluntary standards and private sector leadership. This approach aligns with a broader strategy focused on competitiveness, exports and technological leadership.

The U.S. stance does not imply the absence of rules. Rather, it rejects the idea of supranational authority or binding multilateral control over AI systems. In practical terms, this means the United States is unlikely to support a treaty-based regime for global AI governance in the near future.

The European Union doubles down on multilateral AI governance

The European Union, represented at the summit by Executive Vice President Henna Virkkunen of the European Commission, endorsed the Leaders’ Declaration. The EU reiterated its commitment to global AI governance grounded in democratic oversight, fundamental rights and legal certainty.

Building on existing EU legislation

The EU’s position is consistent with its regulatory trajectory. Recent measures such as the AI Act and the Data Act aim to create a harmonized internal framework for trustworthy AI and data use.

By supporting the Delhi Declaration, the EU signals that its internal model should also inform global AI governance discussions. European policymakers increasingly link AI rules to digital sovereignty, industrial competitiveness and talent attraction.

Partnerships beyond the transatlantic axis

The summit also highlighted the EU’s strategic outreach to middle powers, including India. By aligning with countries that support rule-based cooperation, the EU seeks to shape emerging markets according to similar governance standards.

For Brussels, global AI governance is not only about risk control. It is also about creating predictable markets in which compliant providers can scale internationally.

Implications for EU SMEs and management

For EU SMEs, the divergence on global AI governance has practical consequences. The European market is likely to remain a regulated environment in which compliance, documentation and risk management are integral to AI deployment.

This can increase short-term costs. However, it also creates a stable framework: vendors serving European clients will design tools, contracts and support models around EU expectations. Over time, this can translate into higher quality and clearer liability allocation.

Moreover, if partnerships between the EU and countries endorsing the Delhi Declaration deepen, EU-grade compliance may become a competitive asset in those markets. In that scenario, global AI governance aligned with European principles could shift from being a constraint to becoming a market differentiator.

Executives should therefore monitor both regulatory developments and geopolitical positioning. Strategic planning for AI adoption now requires attention not only to technology and cost, but also to the evolving architecture of global AI governance.

To prepare effectively, review how your current AI strategy aligns with emerging international governance models.