Category: Regulations
AI regulations EU organisations need to follow, including enforcement updates, guidance, and compliance deadlines
• Hidden AI Chatbot Data Leakage Risk
• Reading AI Evaluation Reports: A Practitioner’s Filter for the New Procurement Reality
• What Cloudflare’s 1,100 Layoffs Tell Us About AI Workforce Strategy
• AI Act: only eight Member States ready
• “EU sovereign cloud”: a marketing label
• Reuters made AI literacy mandatory
• AI Act: Five Months to Go
• US AI Framework Targets State Patchwork
• US States Advance AI Regulation Wave
The Digital Omnibus did not collapse on 28 April; it stopped. Until the next trilogue closes, the AI Act applies on its original schedule and 2 August 2026 is your planning deadline. Four scenarios, one quarterly plan, no parallel roadmaps.
The 28 April DMA review did not expand the law to cloud and AI. It narrowed enforcement onto two specific providers and one specific service category. The date worth pinning to your governance calendar is November 2026, not 28 April. Here is what changes for an EU mid-market stack.
A supplier acquisition can quietly move national identity infrastructure under foreign jurisdiction. The DigiD sovereignty risk is a live example of how this happens, why EU law already rules it out, and what every EU organisation running critical systems should check before the same logic applies to them.
The Anthropic-Pentagon fallout is not a defence story. It is an AI vendor governance case study that exposes three structural risks every EU deployer needs to assess and monitor.
The European Commission is assessing whether ChatGPT qualifies for DSA platform designation. For deployers, this changes the risk profile of every AI tool built on a regulated platform.
The EU AI Act regulates AI inputs but says nothing about who owns the output. Most governance programmes have not addressed this gap. Here are four steps to close it.