News and Insights

Practical updates on AI governance, workforce strategy, and digital resilience for EU organisations

Stay ahead of AI change. Get practical updates from Future Prep direct to your inbox.

By subscribing you agree to receive Future Prep updates. Unsubscribe any time.

Latest Insights

From the Future Prep blog

Mechanical typewriter on a wooden desk with wires running from each key into a hidden cable below, illustrating algorithmic management and worker monitoring through keystroke capture.

Algorithmic Management at Scale

Meta's Model Capability Initiative tips workplace monitoring into algorithmic management territory. For EU employers, that means Articles 88 and 22 GDPR, works council consultation and the AI Act's Article 26(7) all apply at once. Three controls to add before the next workforce monitoring pilot goes live.
Glass distillation apparatus in a workshop with a row of derivative receiving vessels behind it, illustrating model provenance and the difference between an original AI system and its distilled copies.

Model Provenance: 3 Critical Vendor Questions

A US State Department cable on Chinese AI distillation has turned model provenance into an immediate vendor due diligence question for EU deployers. Three concrete asks for every supplier this quarter, and where the EU AI Act helps but does not finish the job.
Industrial scale weighing EU-flagged servers against an unmarked unit, illustrating the Cloud Sovereignty Framework procurement assessment.

SEAL-2, SEAL-3 and the Architecture of the EU’s €180 Million Sovereignty Bet

The European Commission awarded €180 million in sovereign cloud contracts assessed against its new Cloud Sovereignty Framework and SEAL levels. A working reference for procurement teams.
Postcard and sealed envelope on a park bench, illustrating the AI acceptable use policy confidentiality gap.

When AI Chats Become Evidence: What the GWG Holdings Ruling Means for Your Acceptable Use Policy

A US court ruled that AI chatbot interactions are not privileged and can be compelled as evidence. Most organisations' AI acceptable use policies are not ready for this. Here is what to fix.
Stone plaques showing Solvinity replaced by Kyndryl, illustrating DigiD sovereignty risk.

DigiD Under Foreign Law: The Sovereignty Risk Nobody Wants to Name

A supplier acquisition can quietly move national identity infrastructure under foreign jurisdiction. The DigiD sovereignty risk is a live example of how this happens, why EU law already rules it out, and what every EU organisation running critical systems should check before the same logic applies to them.
AI vendor governance risk depicted as a ceramic tile fracturing under pressure in an industrial vice, with an unopened spare beside it.

What the Anthropic Case Reveals About AI Vendor Governance

The Anthropic-Pentagon fallout is not a defence story. It is an AI vendor governance case study that exposes three structural risks every EU deployer needs to assess and monitor.

Latest News

Short updates

AI Act: only eight Member States ready

Compliance teams are being asked to operationalise AI Act obligations against an enforcement infrastructure that is visibly incomplete. The Digital Omnibus adds a second layer of uncertainty: the rules themselves may still shift before August. The actionable question for practitioners is not whether to act but how to build governance structures that can flex if deadlines or obligations move.

“EU sovereign cloud”: a marketing label

Organisations in regulated sectors are making procurement decisions based on sovereignty claims that the current regulatory framework cannot yet verify. The gap between what hyperscalers call sovereign and what EU law actually requires is concrete and consequential, especially for organisations with US parent companies or Singapore branches where data jurisdiction matters.

Reuters made AI literacy mandatory

AI literacy is no longer a training budget aspiration. Article 4 of the AI Act requires providers and deployers to ensure sufficient AI literacy among staff, and has been in force since February 2025. Reuters is a useful benchmark because it is a knowledge-work organisation, not a tech company, which makes the parallel directly relevant to legal, compliance, consulting and media teams.

AI Act: Five Months to Go

Five months is not much time if you have not started. SMEs deploying AI in HR, hiring, finance or safety-critical processes need to classify their tools now, not in July. The conformity assessment process – documentation, risk registers, human oversight protocols – takes longer than most teams expect. The good news is that the August deadline applies to new deployments; legacy systems in regulated products have until 2027. Start with an AI inventory. Everything else follows from that.

US AI Framework Targets State Patchwork

The White House’s March 20 National Policy Framework pushes a unified federal baseline for AI safety, child protection, intellectual property safeguards and fraud prevention, explicitly aiming to preempt a confusing patchwork of state-level regulations and foster consistent national oversight. The move signals that the Trump administration is prioritising streamlined governance over fragmented rules.

US States Advance AI Regulation Wave

New York proposes mandatory notices for generative AI outputs, Oregon targets oversight for AI “companion” apps handling sensitive interactions, and Utah strengthens rules on explicit deepfakes, running parallel to federal efforts and creating immediate compliance headaches for multi-state operators. These bills highlight growing bipartisan momentum at the state level, with enforcement teeth like civil penalties and injunctions already in committee drafts.
