Meta has begun installing software on US-based employee computers to record mouse movements, keystrokes and occasional screenshots. Internally branded the Model Capability Initiative, the programme captures behavioural data partly to optimise workflow and partly to train the AI agents Meta intends to ship as workplace tools. Strip away the US labour context, however, and the scenario becomes universally recognisable: an employer deploys behavioural monitoring on staff devices for productivity purposes while also using the same data for AI training. Inside the EU, that combination can move a productivity tool into algorithmic management territory, triggering obligations most employers have not had to address systematically.
Article 88 GDPR makes a single EU rollout impossible
Algorithmic management is not a single GDPR question. Article 88 GDPR allows member states to specify more protective rules for processing in the employment context. Even where a balancing test passes at general GDPR level, national law may impose additional consultation requirements, transparency duties or hard limitations. The practitioner consequence is uncomfortable: there is no “one EU rollout” of an algorithmic management tool. There are as many rollouts as there are member states involved.
Four illustrative variations come up routinely. In Germany, the Federal Labour Court has read covert keystroke logging as inadmissible evidence, while works councils hold a co-determination right under section 87(1)(6) of the Works Constitution Act on technical surveillance. French law takes a different route: the CNIL consistently strikes down monitoring that lacks proportionality and clear prior information, and the Labour Code requires consultation with the social and economic committee before introducing tools that monitor staff. Italy adds a stricter procedural layer, because Article 4 of the Workers’ Statute requires either a union agreement or labour-inspectorate authorisation before remote monitoring. The Netherlands has a lighter regime, but the Works Councils Act still gives the works council a significant role. The differences are decisional, not cosmetic.
Where Article 22 bites algorithmic management tools sold as observational
The standard vendor pitch describes workplace monitoring as observational: data collection without decisions. Article 22 GDPR does not care about that framing. Where the system’s outputs influence task allocation, performance reviews, promotion candidacy or training assignments, the processing produces decisions with legal or similarly significant effects on the individual and Article 22 protections apply. The deployer carries the obligation to identify when the crossover occurs.
In practice, most algorithmic management procurement falls short at this point. First, the buyer accepts the vendor’s framing of the tool as a productivity dashboard. Then, the HR team quietly uses the same outputs in calibration meetings or performance review cycles. At that stage, the Article 22 trigger may already have been pulled, but no one has documented it. Consequently, when a complaint arrives, the deployer has no written record of the analysis that should have been completed before deployment.
Algorithmic management governance starts with that written record, not with vendor reassurance.
Works councils: consultation is not the same as agreement
Many member states require prior involvement of works councils before introducing new algorithmic management tools. The deployer failure mode clusters at the seam between consultation and agreement. Consultation means the works council is informed in good time, has access to the relevant documentation and can issue an opinion. Agreement means the works council holds co-determination rights, and the tool cannot be deployed without its consent.
Germany’s Works Constitution Act sits on the agreement end of the scale for technical surveillance. France and the Netherlands sit closer to consultation. Treating these as interchangeable will produce procedural challenges that survive any later substantive defence on the GDPR side.
The AI Act layer arrives in three months
From 2 August 2026, the EU AI Act adds a third layer to algorithmic management deployments. AI systems used to monitor or evaluate workers fall inside Annex III as high-risk, which triggers the Article 26 deployer obligations. Most directly, Article 26(7) requires the employer to inform workers’ representatives and affected workers, before the system is put into service, that they will be subject to a high-risk AI system. In addition, Article 26(11) requires individuals to be informed when a high-risk system is making or assisting decisions about them. Article 50 transparency obligations enter into application on the same date and add a disclosure layer for systems with chatbot-style interaction or emotion recognition features. Therefore, practitioners should confirm with their HR and IT teams which monitoring tools fall within Annex III scope, and which add the Article 50 layer on top.
Three controls to add to the algorithmic management deployment plan
Three controls belong in every algorithmic management deployment plan, not in the post-incident review.
Treat each member state as a compliance unit
Map the rollout against Article 88 national variations before contract signature. The output is a country-by-country deployment matrix listing the consultation requirement, the works council role and the local data-protection authority guidance for each jurisdiction. If the matrix is not written down, the rollout is not designed yet.
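As a sketch of what writing the matrix down can look like, the structure below captures the Article 88 variations described above for three of the jurisdictions discussed. The field names and the `gating_steps` helper are illustrative assumptions, not a standard format; the legal entries restate points made earlier in the article.

```python
# Hypothetical country-by-country deployment matrix. Structure and
# field names are illustrative; the legal content mirrors the
# jurisdictions discussed in the article.
from dataclasses import dataclass, field

@dataclass
class CountryRequirements:
    country: str
    worker_body_role: str   # "consultation" or "agreement"
    procedural_step: str    # gating step before deployment
    dpa: str                # local data-protection authority

DEPLOYMENT_MATRIX = [
    CountryRequirements(
        country="Germany",
        worker_body_role="agreement",   # co-determination, s. 87(1)(6) Works Constitution Act
        procedural_step="Works council consent on technical surveillance",
        dpa="Federal and state data-protection authorities",
    ),
    CountryRequirements(
        country="France",
        worker_body_role="consultation",
        procedural_step="Prior consultation of the social and economic committee",
        dpa="CNIL",
    ),
    CountryRequirements(
        country="Italy",
        worker_body_role="agreement",
        procedural_step="Union agreement or labour-inspectorate authorisation (Art. 4 Workers' Statute)",
        dpa="Garante",
    ),
]

def gating_steps(matrix):
    """Jurisdictions where deployment is blocked until consent or authorisation."""
    return [c.country for c in matrix if c.worker_body_role == "agreement"]
```

The point of the structure is the distinction it forces: every row must declare whether the worker-representation step is consultation or agreement, which is exactly the seam where deployments fail.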
Document the Article 22 trigger analysis in writing
Before the tool goes live, the DPIA records exactly which outputs feed which HR decisions. The analysis identifies when observational data becomes decisional input. It specifies the human-oversight, contestation and review mechanisms that apply when it does.
Treat works council cadence as a gating dependency
Add the consultation or agreement timeline into the deployment plan as a hard dependency, not a parallel workstream. Technical implementation cannot begin before that obligation is closed.
Close
Algorithmic management is now a deployment discipline, not a vendor category. Meta’s resources mean nothing if the tool tips into Article 22 territory without the documentation behind it, and the same logic applies to the next workforce productivity pilot inside any EU subsidiary. The Workforce AI Monitoring Decision Sheet runs through the questions that belong in front of the next sign-off.