The EU AI Act in the public sector: A comprehensive guide for governments

In less than a decade, artificial intelligence has evolved from pilot technology into a fundamental operational tool for government. At the same time, citizens' trust in automated decision-making is fragile. The European Union wants to restore that trust through the Artificial Intelligence Act (AI Act), the world's first comprehensive AI regulation. The law is of particular importance for the public sector: government bodies are among the largest users of AI and at the same time fall into its most strictly regulated categories.

This blog article shows which deadlines and milestones are already established, which additional obligations apply specifically to governments, where the greatest risks lie in applications such as social security, law enforcement and permit issuance, and how you as a public organization can prepare in time for enforcement and supervision.

From publication to enforcement: the timeline at a glance

The AI Act was published in the Official Journal on July 12, 2024 and formally entered into force on August 1, 2024. Instead of a "big bang" introduction, the law follows a phased application with crucial milestones for the public sector.

The implementation runs from August 2024 to August 2027, with each phase bringing specific obligations. Although the European Commission hinted on June 12, 2025 that there may be delays in issuing delegated acts, the formal calendar remains unchanged.

Fines and sanctions

The AI Act imposes heavier sanctions than the GDPR. Use of prohibited AI can be fined up to €35 million or 7% of global annual turnover, whichever is higher; violations of other obligations can reach €15 million or 3%, and supplying incorrect information €7.5 million or 1%. Lower absolute maximums apply to small government corporations and semi-public institutions, but reputational damage is often the decisive factor.
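
Because each tier reads "whichever is higher", the percentage component dominates for large organizations. A quick worked illustration with made-up turnover figures (the helper function is our own):

```python
def max_fine(annual_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """AI Act fines take the higher of a fixed amount and a share of global turnover."""
    return max(fixed_cap_eur, pct * annual_turnover_eur)

# Prohibited-practice tier: EUR 35 million or 7% of global annual turnover
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0 -> the 7% dominates
print(max_fine(100_000_000, 35_000_000, 0.07))    # 35000000.0  -> the fixed cap applies
```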

The risk-based classification for governments

The AI Act sorts systems into four classes: unacceptable, high, limited and minimal risk. The focus is on high-risk systems (Annex III). Particularly relevant for the public sector are:

  • Biometric identification (remote or post-event)
  • Critical infrastructure (traffic management, energy)
  • Education and examinations
  • Employment and personnel selection
  • Essential government and banking services (benefits, credits)
  • Law enforcement, migration and justice

For each high-risk system, a documented risk management system, strict data governance, extensive technical documentation and CE marking are required before it may be put into use.
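
A practical first step is to capture this triage in a simple inventory. Below is a minimal sketch in Python of what a first-pass classification could look like; the `AISystem` record and the domain labels are our own illustration, and every outcome still needs legal review:

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (art. 5)
    HIGH = "high"                  # Annex III use cases
    MINIMAL = "minimal"            # refine further for transparency duties

# Annex III domains especially relevant for the public sector
HIGH_RISK_DOMAINS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement", "migration_and_justice",
}

@dataclass
class AISystem:
    name: str
    domain: str
    role: str                  # "provider", "deployer" or both
    prohibited_practice: bool  # e.g. social scoring, emotion recognition at work

def classify(system: AISystem) -> RiskCategory:
    """Rough first-pass triage; a lawyer has to confirm the outcome."""
    if system.prohibited_practice:
        return RiskCategory.UNACCEPTABLE
    if system.domain in HIGH_RISK_DOMAINS:
        return RiskCategory.HIGH
    return RiskCategory.MINIMAL

benefits_model = AISystem("fraud scoring", "essential_services", "deployer", False)
print(classify(benefits_model).value)  # -> "high"
```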

Prohibited applications and strict exceptions

Since February 2, 2025, any system is prohibited that:

  • ranks citizens through social scoring
  • builds facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage
  • deploys emotion recognition at school or work
  • uses sensitive biometrics to infer race or sexual orientation

Real-time facial recognition in public spaces by police is prohibited in principle, but is allowed in a narrow set of scenarios, such as searching for a missing child or preventing a terrorist attack, and only after prior judicial or independent administrative authorization.

Specific obligations for the public sector

Deployers versus providers

A government institution can be both a provider (it develops an algorithm itself) and a deployer (it buys or rents an external system). The AI Act assigns distinct duties to each role, so it is essential to determine in advance which role you fulfill in each use case.

Fundamental Rights Impact Assessment (FRIA)

For any deployment of a high-risk system, the government must conduct a FRIA before putting it into use (art. 27). The FRIA describes the purpose, the target group, potential infringements of fundamental rights, the human oversight measures and the remedial measures. Summaries of the FRIA must be published.
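
The elements named in art. 27 translate naturally into a structured record that can be versioned alongside the system itself. A minimal sketch, with field names of our own choosing; the legal text, not this structure, remains leading:

```python
from dataclasses import dataclass

@dataclass
class FRIA:
    """Illustrative Fundamental Rights Impact Assessment record (art. 27)."""
    system_name: str
    purpose: str                         # what the deployment is meant to achieve
    affected_groups: list[str]           # categories of persons likely affected
    fundamental_rights_risks: list[str]  # e.g. non-discrimination, privacy
    human_oversight: str                 # who can intervene, and how
    remedial_measures: list[str]         # what happens if harm materializes
    published_summary_url: str = ""      # filled in once the summary is public

fria = FRIA(
    system_name="benefits fraud scoring",
    purpose="prioritize manual review of benefit applications",
    affected_groups=["benefit applicants"],
    fundamental_rights_risks=["indirect discrimination", "data protection"],
    human_oversight="a caseworker confirms every flag before action is taken",
    remedial_measures=["objection procedure", "model rollback"],
)
```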

Registration in the EU database

Both providers and deployers of high-risk systems must register their system in the central EU database (art. 49/71) and keep the data current.

Quality management and conformity assessment

For high-risk systems, an auditable Quality Management System (QMS) is mandatory, plus an internal or external conformity assessment (art. 17, 43). Only after a 'substantial modification' must a system be reassessed, which is particularly relevant for continuously learning models.
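
For continuously learning systems, it helps to define up front which changes you will treat as a substantial modification. A minimal sketch of such a tripwire; the triggers and the tolerance are illustrative internal policy choices, not thresholds from the AI Act:

```python
from dataclasses import dataclass

@dataclass
class ModelRelease:
    version: str
    intended_purpose: str
    accuracy: float
    training_data_sources: frozenset[str]

def is_substantial_modification(old: ModelRelease, new: ModelRelease,
                                accuracy_tolerance: float = 0.02) -> bool:
    """Flag changes that plausibly require a new conformity assessment."""
    if new.intended_purpose != old.intended_purpose:
        return True  # repurposing always triggers reassessment
    if new.training_data_sources != old.training_data_sources:
        return True  # new data sources can shift behavior and bias
    return abs(new.accuracy - old.accuracy) > accuracy_tolerance

v1 = ModelRelease("1.0", "triage permit applications", 0.91, frozenset({"permits_2023"}))
v2 = ModelRelease("1.1", "triage permit applications", 0.88, frozenset({"permits_2023"}))
print(is_substantial_modification(v1, v2))  # True: accuracy drifted beyond tolerance
```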

Model contract clauses for procurement

The EU Community of Practice on public procurement of AI published updated Model Contractual Clauses (MCC-AI) in early 2025, in a High-Risk and a Light version. Public procurers can use these to contractually enforce AI Act requirements (audit rights, a QMS, data for the AI register, and so on).

Note: research shows that many government organizations struggle with the complexity of AI procurement and expertise scarcity. National guidance is often still lacking.

Competent authorities and AI sandboxes

Each EU country must designate one or more competent AI supervisory authorities by August 2, 2025 (art. 70) and have a national AI regulatory sandbox operational by August 2, 2026 (art. 57).

Public AI use cases in 2025

  1. Benefits and fraud detection — scoring models for allowances and assistance
  2. Smart mobility — traffic lights that optimize flow with computer vision
  3. Chatbots for citizen services — automation of permit applications
  4. Predictive policing — hotspot analysis of burglary or nuisance
  5. Medical triage in public hospitals

In virtually all of these domains the systems qualify as high-risk, and some, such as predictive policing based solely on profiling individuals, are outright prohibited.

Risks and points of attention

The main risks for government organizations:

  • Discrimination: fraud detection that disproportionately often marks vulnerable groups as "high risk"
  • Unlawful surveillance: real-time camera analytics without a legal basis
  • Lack of explainability: a citizen receives no comprehensible explanation for a permit rejection
  • Cybersecurity and data breaches: leaked model parameters can expose facial templates
  • Vendor lock-in: a supplier refuses to share compliance documentation

Integration with existing EU rules

The AI Act comes on top of the GDPR, the Data Governance Act, the Data Act and the Digital Services Act. Think for example of:

  • DPIA ↔ FRIA: a DPIA under art. 35 GDPR remains necessary; the summary becomes part of the FRIA file
  • Data Act: mandatory interoperability affects the definition of 'substantial modification' in model updates

Roadmap to compliance (2025-2027)

  1. Inventory all existing and planned AI applications and link each system to a risk category
  2. Determine the role (provider vs deployer) and establish which obligations apply
  3. Conduct FRIAs and DPIAs in parallel for all high-risk systems
  4. Create contract templates (based on MCC-AI) and require CE marking or equivalent proof
  5. Build a QMS and register systems in the EU database
  6. Train employees in AI literacy, governance and audit skills
  7. Test innovations in the national AI sandbox to detect teething problems and compliance issues early
  8. Monitor continuously: keep logs, report incidents and repeat the FRIA with every major model change (see the sketch after this list)
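
Step 8 is the easiest to postpone and the hardest to retrofit. Below is a minimal sketch of an append-only decision log that later feeds audits and incident reports; the field names are our own convention, not something the AI Act prescribes:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_monitoring")

def log_decision(system: str, model_version: str, outcome: str,
                 human_reviewed: bool) -> None:
    """Write one structured, append-only record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "outcome": outcome,
        "human_reviewed": human_reviewed,
    }
    logger.info(json.dumps(record))

log_decision("fraud scoring", "1.1", "flagged_for_review", human_reviewed=True)
```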

Conclusion

The EU AI Act sets a new standard for responsible use of AI in the public sector. The timeline is ambitious, the fines substantial, but the law also offers opportunities: more trust, better data quality and a level playing field. For governments that now invest in good governance, thorough impact assessments and robust procurement processes, August 2, 2026 need not be a date of fear, but rather the moment when digital service delivery becomes demonstrably fairer, safer and more human-centered.

In short: start inventorying today, involve lawyers and data scientists, and deliberately set the bar high. Then in 2027 you will not only be compliant, but above all in control of the AI future of your public organization.

Need help with the EU AI Act? 💡

Want to know what the EU AI Act means for your government organization? Or do you have questions about implementation in the public sector? Contact us for a free consultation.