Who Owns What Your AI Produces?

The EU AI Act regulates AI inputs but says nothing about who owns the output. Most governance programmes have not addressed this gap. Here are four steps to close it.

AI-Generated Output Ownership Is Unresolved

The EU AI Act regulates how AI systems are built. It does not regulate who owns what they produce. This is a problem for any organisation building an AI governance programme, because AI-generated output ownership sits in a legal gap that most compliance teams have not addressed.

The Act’s copyright provisions run entirely in one direction: inputs. GPAI providers must maintain a copyright compliance policy, respect opt-out rights under the DSM Directive and publish a training data summary using the EU AI Office template. These obligations have applied since August 2025. They tell providers what they owe to rightholders whose works were used in training. They say nothing about who owns the text, images, code or reports that come out the other end.

What EU Copyright Law Actually Requires

Existing EU copyright doctrine requires human authorship. A work must be the author’s own intellectual creation: the result of free and creative choices made by a natural person. Where no such contribution exists, the output falls outside copyright protection entirely. It belongs to nobody.

The first EU court to test this was the Municipal Court of Prague in 2024. A plaintiff had used DALL-E to generate an image from a simple prompt and published it on a website. When someone else copied the image, the plaintiff claimed copyright infringement. The court dismissed the claim, holding that AI cannot be an author under Czech copyright law, which requires that a work be the unique result of a natural person’s creative activity. The prompt itself was merely an idea, and ideas are not eligible for protection.

The Door Left Open

The Prague court did not slam the door entirely. It left open the possibility that a sufficiently creative, detailed prompt might support a human authorship claim. A simple instruction was not enough, but the court implied that substantial, demonstrable creative input could change the outcome. This tracks broadly with the US Copyright Office position, which has similarly rejected protection for AI-generated images while allowing that human-authored elements within a mixed work may qualify.

The UK takes a different path altogether. Section 9(3) of the Copyright, Designs and Patents Act 1988 grants authorship of computer-generated works to the person who made the arrangements necessary for their creation. Whether this provision will survive contact with modern generative AI is debatable; the UK Intellectual Property Office consulted on its potential repeal in 2024. But for now, it remains one of the few major jurisdictions that explicitly assign AI-generated output ownership rights by statute.

Why This Gap Matters for Governance Programmes

Most organisations have structured their AI governance around two anchors: the EU AI Act and GDPR. That covers system classification, risk assessment, transparency obligations and data protection. It does not cover what happens when the AI produces something and nobody owns it.

The TDM Exception Framework

The connection between input-side obligations and output-side uncertainty runs through Articles 3 and 4 of the DSM Directive. Article 3 provides a mandatory text and data mining exception for research organisations and cultural heritage institutions. Article 4 extends the exception to everyone else, unless rightholders have expressly opted out. This is the mechanism the AI Act references when it requires GPAI providers to respect reservation of rights. Organisations deploying AI need to understand this framework; it governs whether the training that produced their output was lawful in the first place.

Contracts as the Primary Protection

If AI-generated output ownership cannot be secured through copyright, contracts become the only reliable allocation mechanism. The terms of service between your organisation and the AI provider determine what you can do with the output, whether you can claim exclusivity and what happens if a third party copies it. Equally, the terms between your organisation and your clients determine who bears the risk if delivered work turns out to be unprotectable. This is a governance action item; every AI-dependent workflow should have its contractual chain reviewed for IP risk allocation.

Trade Secrets as an Alternative Route

When copyright is uncertain, the EU Trade Secrets Directive (2016/943) offers a parallel track. It protects information that is secret, commercially valuable because of its secrecy and subject to reasonable steps to maintain that secrecy. The output itself may not qualify, but the process that produced it can. The curated prompts, the structured datasets, the workflow design and the selection criteria that shaped the final product are all protectable, provided the organisation treats them as confidential and documents the steps taken to keep them so. This reframes the question from “can I own this output?” to “can I protect what went into producing it?”

The GDPR Intersection

Personal data in training sets triggers data protection obligations regardless of copyright status. If AI-generated output contains or reflects personal data, GDPR applies to that output independently. This is a parallel risk lane that governance programmes must address alongside the ownership question, not instead of it.

Four Steps to Close the AI-Generated Output Ownership Gap

  1. Audit your contractual chain for every AI tool in operational use. Map the terms of service, licensing conditions and client-facing agreements against the specific question: who bears the IP risk if this output is unprotectable? (A minimal register sketch follows this list.)
  2. Review your TDM exposure. Understand whether the AI tools you deploy rely on training data covered by Article 4 opt-outs and what that means for the legitimacy of the outputs you receive.
  3. Classify and protect your production processes under trade secret protocols. Document your prompt libraries, selection criteria and workflow designs as confidential business information.
  4. Flag personal data in AI workflows as an independent compliance requirement; do not assume that copyright coverage (or lack of it) resolves data protection obligations.
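
Steps 1 and 3 both come down to keeping a structured register: which tools are in use, what the contracts say about their output, and which production assets are being treated as trade secrets. Below is a minimal sketch in Python of what such a register might record. Every tool name, field and risk category here is an illustrative assumption, not a prescribed schema; your legal and compliance teams will define the real one.

```python
from dataclasses import dataclass, field
from enum import Enum

class IPRisk(Enum):
    """Who bears the loss if the output turns out to be unprotectable."""
    PROVIDER = "provider"
    OUR_ORGANISATION = "our_organisation"
    CLIENT = "client"
    UNALLOCATED = "unallocated"  # no contract term addresses it: the red flag

@dataclass
class AIToolRecord:
    """One row in a hypothetical contractual-chain audit (steps 1 and 3)."""
    tool: str
    provider_terms_grant_output_rights: bool   # do the provider's ToS assign output rights to us?
    client_contract_addresses_ai_output: bool  # does the client agreement allocate the risk?
    ip_risk_bearer: IPRisk = IPRisk.UNALLOCATED
    trade_secret_assets: list[str] = field(default_factory=list)  # step 3: prompts, workflows, criteria

def unresolved(register: list[AIToolRecord]) -> list[AIToolRecord]:
    """Return the tools whose IP risk nobody has contractually accepted."""
    return [r for r in register if r.ip_risk_bearer is IPRisk.UNALLOCATED]

# Example entry for a hypothetical drafting workflow built on a third-party model.
register = [
    AIToolRecord(
        tool="report-drafting-assistant",
        provider_terms_grant_output_rights=True,
        client_contract_addresses_ai_output=False,
        trade_secret_assets=["prompt library v3", "source-selection criteria"],
    ),
]

for record in unresolved(register):
    print(f"Review needed: {record.tool} has no contractual IP risk allocation")
```

Even a register this simple forces the question the audit exists to answer: for each AI-dependent workflow, either a named party bears the IP risk by contract, or nobody does.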

If your AI governance programme covers the AI Act and GDPR but not output ownership, it has a structural gap. Start closing it with a contractual and IP audit of your most AI-dependent workflows.
