On 24 July 2025, the European Commission released an explanatory notice and a standardised template that providers must use to publicly summarise the training content of general-purpose AI models. This marks a significant step in operationalising Article 53(1)(d) of the EU AI Act, which requires providers to make a sufficiently detailed summary of training content publicly available.
For developers, downstream deployers, and particularly SMEs using or fine-tuning general-purpose AI (GPAI), the implications are immediate. While the obligation directly targets GPAI providers, the ripple effects extend to any organisation relying on their systems. Understanding what the template requires and how SMEs can engage with it is essential to staying compliant and competitive.
Why this matters now
The public summary requirement is one of the EU AI Act’s key transparency measures. Its aim is to offer meaningful insight into how GPAI systems are built, particularly in relation to the datasets used for training.
This requirement becomes enforceable from 2 August 2025 for new GPAI models placed on the market. Models already in use must comply by 2 August 2027. That means SMEs have limited time to understand the expectations, assess their exposure, and take steps to ensure they meet relevant obligations.
What is in the new EU template for GPAI training content summaries?
The EU’s explanatory notice includes a detailed, structured template that all GPAI providers must complete and publish. It focuses on offering a high-level yet informative view of how AI models are trained.
The template is divided into three main sections:
Section 1 – Model description
This part includes basic details on the model, such as:
- Model name and version
- Intended general purposes (e.g. text generation, classification)
- Release date and major updates
- Whether it is made available under open-source or commercial terms
Section 2 – Training content overview
This section focuses on the datasets used and how they were gathered:
- General description of dataset types (text, image, video, code)
- Sources (e.g. web scraping, licensed content, user-generated data)
- Information on the geographic or linguistic focus
- Quality assurance and filtering techniques
- Whether personal data may be included
- Measures to address bias, discrimination, or copyright risk
The summary does not require exhaustive or proprietary details. However, providers are expected to be transparent enough to allow meaningful public scrutiny.
Section 3 – Methodological notes
Here, providers must explain:
- The model’s training methodology in non-technical language
- How datasets were pre-processed or normalised
- Any known limitations of the training data
- How training content aligns with the model’s intended use cases
All sections must be completed in clear, accessible language. The summary must be made available in English and at least one additional EU official language. A PDF version must be published on the provider’s website and submitted to the European Commission.
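For internal review, the three sections described above can be turned into a simple completeness checklist. The sketch below is illustrative only: the keys paraphrase this article's summary of the template, not the official field names in the Commission's document.

```python
# Illustrative checklist mirroring the three sections described above.
# Keys and item names paraphrase this article, not the official template wording.
SUMMARY_CHECKLIST = {
    "model_description": [
        "model name and version",
        "intended general purposes",
        "release date and major updates",
        "open-source or commercial terms",
    ],
    "training_content_overview": [
        "dataset types (text, image, video, code)",
        "sources (web scraping, licensed, user-generated)",
        "geographic or linguistic focus",
        "quality assurance and filtering",
        "presence of personal data",
        "bias, discrimination, and copyright measures",
    ],
    "methodological_notes": [
        "training methodology in plain language",
        "pre-processing and normalisation",
        "known limitations of training data",
        "alignment with intended use cases",
    ],
}

def missing_items(completed: set[str]) -> list[str]:
    """Return checklist items not yet covered by a provider's summary."""
    return [
        item
        for items in SUMMARY_CHECKLIST.values()
        for item in items
        if item not in completed
    ]

# A provider summary covering only the model description would leave
# the items from the other two sections open:
print(len(missing_items(set(SUMMARY_CHECKLIST["model_description"]))))  # → 10
```

A checklist like this makes it easy to score a vendor's published summary against the template's expectations before relying on the model.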
Are SMEs directly affected?
Most SMEs will not be the original developers of GPAI models. However, that does not mean they can ignore the transparency requirements. There are three key reasons why SMEs still need to act.
- SMEs using GPAI models must assess risk
Whether a company integrates ChatGPT, Claude, or an open-source large language model into its services, it may be required to evaluate the underlying system. SMEs that build applications on top of GPAI must understand what data was used to train the core model.
Without that insight, organisations risk deploying tools that are biased, non-compliant with intellectual property rules, or based on datasets that include sensitive or personal information.
- SMEs may be expected to provide information to clients or authorities
Even when SMEs are not the provider, they are often the interface between AI systems and end-users. Clients or regulators may request information about how the AI model works. SMEs need to know where to find that information and how to interpret it.
- Downstream providers may be held responsible
In some cases, SMEs that fine-tune GPAI models or use them for high-risk applications (as defined by the AI Act) may themselves become subject to provider obligations. This includes the duty to produce a public summary or to ensure the original provider has done so.
Strategic benefits of engaging early
Even if the legal obligation does not apply directly, there are strategic advantages for SMEs in understanding and applying the new template as part of internal governance.
- It shows clients, partners, and investors that the company takes responsible AI use seriously
- It helps avoid reputational risks associated with opaque or controversial models
- It supports compliance with other EU regulations such as the GDPR and the Digital Services Act
- It prepares the organisation for future obligations as AI regulation expands
How SMEs can respond
A practical approach to transparency starts with internal mapping. SMEs should begin by identifying which GPAI models are used across the organisation and then evaluate what public information is available about those models.
Step-by-step approach
- List all AI tools and models in use, including plugins, APIs, and embedded systems
- Check whether the model provider has published a training content summary
- Evaluate the level of detail and whether it meets the EU’s expectations
- Document how the organisation uses each model and whether fine-tuning occurs
- Request additional information from vendors if needed
- Include this information in AI registers or documentation used internally
Where summaries are not yet available, SMEs should monitor updates from model providers and plan how to respond if requested to justify or explain their AI choices.
What about open-source models?
Many SMEs experiment with or deploy open-source models such as LLaMA, Mistral, or Falcon. These models may not come with comprehensive documentation. The template helps SMEs define what information should be requested or compiled internally.
Even without direct access to the training data, companies should still document:
- Why a specific model was chosen
- What the known limitations are
- Whether fine-tuning was performed
- How data governance policies address potential risks
Preparing for 2027 and beyond
While the immediate deadline affects only models placed on the market on or after 2 August 2025, SMEs should treat this period as a transition phase. The 2 August 2027 deadline for existing GPAI models is likely to prompt stricter enforcement and wider scrutiny of systems already in use.
Governments, public clients, and large corporate partners may begin requiring training content summaries as part of procurement or due diligence processes.
By taking steps now, SMEs can future-proof their use of AI tools and align themselves with emerging best practices across Europe.
Conclusion
The EU’s template for training content summaries sets a clear transparency benchmark for the AI industry. For SMEs, it offers a roadmap to assess the AI models they use, understand associated risks, and strengthen trust with clients and stakeholders.
Rather than waiting until enforcement becomes mandatory, small businesses can act now to integrate this guidance into their governance structures. Doing so not only prepares them for compliance but also builds a stronger foundation for innovation and accountability.
Learn more
Looking for support? Explore our AI Act compliance training for SMEs, try our free assessment tool to check if your organisation uses general-purpose AI models, or read our blog post on how to build an internal AI use register.
