AI Act 2026: The Missing Guidance SMEs Cannot Afford to Ignore

The European Commission missed its 2 February 2026 deadline to publish EU AI Act high-risk guidance. With the August 2026 obligations still on schedule, EU SMEs cannot afford to wait for official clarity before acting.
[Image: calendar marking the missed 2 February 2026 deadline for EU AI Act high-risk guidance, with an unopened document on a desk]

The Deadline the Commission Missed

The EU AI Act set a clear obligation: by 2 February 2026, the European Commission had to publish guidelines on the practical implementation of Article 6, including a template for post-market monitoring plans. That deadline passed without publication. The Commission indicated it was still integrating stakeholder feedback and aimed to release a revised draft later in February; final adoption may slip to spring at the earliest.

This is not a minor procedural slip. Article 6 is the classification engine of the AI Act. It determines whether a system counts as high-risk under Annex III, and therefore whether the full weight of documentation, conformity assessment, human oversight and monitoring requirements applies. Without that guidance, organisations deploying AI in hiring, credit scoring, biometric identification or other sensitive functions are left interpreting the raw legislative text six months before most obligations take effect.

What the Law Still Requires From August 2026

The guidance delay does not alter the August deadline. From 2 August 2026, the remainder of the AI Act applies, including obligations for high-risk Annex III systems, transparency duties under Article 50, and the requirement for at least one operational AI regulatory sandbox per Member State. These dates have not officially changed. The Commission’s Digital Omnibus proposal suggests linking enforcement of certain high-risk rules to the availability of harmonised standards, which standardisation bodies CEN and CENELEC are now targeting for late 2026; Parliament and Council appear broadly supportive of that shift. But no amendment is final, and planning on a delay that has not been legislated is a significant governance risk.

A Second Missed Deadline Compounds the Problem

The February guidance miss is not the first. Technical standards for AI were due from CEN and CENELEC in autumn 2025 and were not delivered; the revised target is now end of 2026. Several Member States have yet to designate national competent authorities. The pattern matters because it signals that implementation infrastructure across the board is running behind the legislative timeline. That compounds uncertainty for organisations that were counting on official guidance to anchor their compliance programmes.

What SMEs Should Do Now

For EU SMEs, the combination of a firm August 2026 application date and absent official guidance on AI Act high-risk classification means that waiting is not a strategy. Legal commentators advise proceeding on the basis of the Act’s text, existing sectoral guidance and available draft standards. The practical steps are straightforward, even if the regulatory picture is not.

The first priority is building an internal register of AI systems and classifying them by risk category using the Annex III criteria as they stand. That exercise surfaces where the documentation burden will fall and where human oversight mechanisms need to be in place. The second priority is budgeting for that work in 2026; technical modifications and conformity documentation take time, and leaving it until guidance appears could mean leaving it until August.

The Value of Sector Associations and Early Sandbox Engagement

The absence of Commission guidance increases the practical value of industry associations and sector-specific bodies. Sectors such as financial services, medical devices and transport already have regulatory frameworks that the AI Act layers on top of. Where sector-specific interpretation exists, it provides more reliable ground than waiting for general Commission guidance.

National sandboxes, once operational, will offer a structured route for testing compliance positions directly with supervisory authorities. Early engagement with sandbox frameworks, even before they are fully established, signals compliance intent and provides a channel for clarifying classification questions before enforcement begins.

Building Controls That Hold Under Uncertainty

The risk of building compliance programmes around specific guidance that then changes is real. The more durable approach is to invest in controls that serve regardless of how precise obligations are ultimately drawn: data governance, incident reporting processes, model documentation and human oversight structures. These are reusable assets. If standards tighten, they reduce re-work. If an organisation enters an AI supply chain with a larger partner or public sector buyer, they demonstrate credibility. Compliance built on solid governance infrastructure holds its value under regulatory change; compliance built on waiting for the right document does not.
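As one example of a reusable control, an incident record that captures what happened, who reviewed it, and what was done survives any redrafting of the detailed obligations. The sketch below is a hypothetical internal schema, not a mandated reporting format; the field names and severity labels are assumptions an organisation would align with its own policy.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncidentRecord:
    system_name: str
    description: str
    severity: str          # e.g. "low" / "serious", per internal policy
    detected_at: str       # ISO 8601 timestamp
    human_reviewer: str    # who exercised oversight
    corrective_action: str = ""

    def to_json(self) -> str:
        # Serialise for an audit trail or later regulator submission.
        return json.dumps(asdict(self), indent=2)

# Hypothetical incident entry for illustration.
incident = AIIncidentRecord(
    system_name="CV screening model",
    description="Score drift against the baseline applicant cohort",
    severity="serious",
    detected_at=datetime.now(timezone.utc).isoformat(),
    human_reviewer="compliance.officer@example.com",
)
print(incident.to_json())
```

Whatever final form incident reporting takes, a structured log like this reduces the re-work if obligations tighten and demonstrates operational control to partners and buyers.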

Organisations that treat AI governance as a strategic function rather than a compliance checkbox are better placed to absorb the uncertainty that the current implementation gap has created.

If you are unsure how to classify your AI systems under the current Annex III criteria, start there. Explore the EU AI Act’s classification framework, or speak to sector counsel who can apply it to your specific use cases.


AI governance is not a future problem

Regulation is already in effect. Your competitors are already building internal capability. The gap between ‘we are aware of AI’ and ‘we have operational control’ is closing, and it closes faster with a structured framework.


Book a 30-minute discovery call. No obligation. We will assess where your organisation stands and what a realistic starting point looks like.

No sales pressure. No jargon. Just a structured conversation about your organisation's AI readiness.
