Enforcement reality 2025: While penalty provisions have been formally in effect since August 2, 2025, practice shows a fragmented picture. Only 6 of 27 member states have designated their supervisory authorities, while organizations struggle with concrete compliance implementation.
The enforcement paradox of 2025
August 2, 2025 marked a turning point in EU AI Act implementation: penalty provisions became formally effective, with fines up to €35 million or 7% of global turnover for violations of prohibited AI practices. Yet reality shows a more complex picture than the legal text suggests.
The central paradox of enforcement in 2025 is that while legal instruments exist, the practical enforcement infrastructure is still under construction. This creates a unique situation where organizations must prepare for enforcement that can technically already take place, but whose form and intensity remain unclear.
State of affairs: national authorities in motion
The fragmented designation landscape
The obligation for member states to designate national authorities before August 2, 2025 has led to a patchwork of implementation strategies. Recent research by Clyde & Co shows that only six member states have designated their authorities: Denmark, Ireland, Latvia, Lithuania, Luxembourg, and Spain. The remaining 21 member states have not yet made announcements.
| Implementation status | Member states | Typical characteristics | Enforcement impact |
|---|---|---|---|
| Implemented | 6 | Authorities designated | Enforcement possible |
| Not implemented | 21 | No announcement yet | Enforcement impossible |
Three governance models emerging
Among the six member states that have already designated their authorities, different governance models are emerging. Some member states, such as Spain, have opted for newly established public authorities with dedicated AI expertise. The Spanish Artificial Intelligence Supervisory Agency illustrates this centralized approach, which offers coherence but takes time to build capacity.
Other countries like Luxembourg designate existing authorities - in their case the National Commission for Data Protection - extending their mandate to AI supervision. This distributed approach is faster to implement because existing expertise and processes can be reused, but may lead to fragmentation between different supervisory domains.
Countries like Ireland and Lithuania have chosen a hybrid model in which multiple authorities collaborate. Ireland has designated no fewer than eight different institutions, while Lithuania has its Innovation Agency and Communications Regulatory Authority working together. This combines elements of both other models: central coordination with sector-specific execution.
Dutch approach: pragmatic coordination
The Netherlands has chosen a hybrid model where the Dutch Data Protection Authority (AP) and the Dutch Digital Infrastructure Inspectorate (RDI) jointly shape AI Act supervision, supported by sector-specific supervisors. This approach combines existing expertise with new AI-specific capabilities.
The enforcement timeline: what applies when
August 2025: the first wave
Since August 2, 2025, penalty provisions apply to prohibited AI practices under Art. 5 with fines up to €35 million or 7% global turnover. Transparency obligations for certain AI systems and GPAI obligations for new models have also been in effect since that date.
However, crucial enforcement instruments are not yet active. Many investigatory and enforcement powers only take effect on August 2, 2026.
August 2026: full enforcement
Then it really gets serious. High-risk AI systems fall under full compliance requirements, while market surveillance authorities gain extensive investigatory powers. GPAI providers must comply with all systemic risk obligations from that moment, significantly increasing enforcement intensity.
Compliance gaps in practice
Gap 1: Documentation and transparency
The most pressing compliance gap concerns model documentation and transparency artifacts. Many organizations underestimate the administrative burden of continuous documentation updates with each model release.
Practical challenge: A Dutch fintech discovered their compliance team spent 40+ hours per month updating AI system documentation for just 8 production models. This doesn't scale for organizations with dozens of systems.
Solution: Automated documentation pipelines that automatically extract and format model metadata according to AI Act requirements.
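Such a pipeline can be sketched in a few lines. The field names below are illustrative, loosely inspired by the documentation themes in the AI Act, not an official schema; the value is that incomplete documentation fails loudly at release time instead of surfacing during an audit:

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModelRecord:
    """Minimal documentation record; fields are illustrative, not a legal template."""
    name: str
    version: str
    intended_purpose: str
    risk_category: str          # e.g. "high-risk", "limited-risk"
    training_data_summary: str
    human_oversight: str
    last_updated: str

def render_record(meta: dict) -> str:
    """Turn raw model metadata into a documentation entry; missing fields fail fast."""
    required = ["name", "version", "intended_purpose", "risk_category",
                "training_data_summary", "human_oversight"]
    missing = [f for f in required if not meta.get(f)]
    if missing:
        raise ValueError(f"Incomplete documentation, missing: {missing}")
    record = ModelRecord(last_updated=date.today().isoformat(),
                         **{f: meta[f] for f in required})
    return json.dumps(asdict(record), indent=2)

# Example: run automatically on each model release (e.g. as a CI step)
meta = {
    "name": "cv-screening",
    "version": "2.3.1",
    "intended_purpose": "Rank incoming CVs for recruiter review",
    "risk_category": "high-risk",
    "training_data_summary": "120k anonymised CVs, 2019-2024, EU only",
    "human_oversight": "Recruiter approves every shortlist",
}
print(render_record(meta))
```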
Gap 2: Risk assessment and FRIA
Fundamental Rights Impact Assessments (FRIAs) are proving more complex in practice than expected. Organizations struggle to operationalize abstract concepts such as "human dignity" and "non-discrimination" into concrete technical controls.
FRIA in practice: A major recruitment platform spent 6 months on their first FRIA for a CV screening algorithm. The biggest challenge was not the legal analysis, but translating fundamental rights risks into concrete mitigation measures.
Gap 3: Human oversight implementation
Meaningful human oversight is proving to be one of the most underestimated compliance requirements. Organizations often assume a "human in the loop" is sufficient, but the AI Act requires that human intervention can actually be effective.
Common mistake: Dashboard solutions that show so many AI decisions simultaneously that human reviewers become overwhelmed and automatically accept without real assessment.
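One way to avoid this failure mode is to cap the review queue at what a human can genuinely assess, prioritizing low-confidence decisions and sampling the rest. A minimal sketch; the 30-per-hour capacity and the 0.7 confidence threshold are illustrative assumptions, not values from the AI Act:

```python
import random

MAX_REVIEWS_PER_HOUR = 30  # assumption: gives each case roughly two minutes of attention

def select_for_review(decisions, capacity=MAX_REVIEWS_PER_HOUR, seed=0):
    """Bound the reviewer's queue: all low-confidence decisions first,
    then a random sample of the rest, never exceeding human capacity."""
    rng = random.Random(seed)
    flagged = [d for d in decisions if d["confidence"] < 0.7]   # always reviewed
    rest = [d for d in decisions if d["confidence"] >= 0.7]
    remaining = max(0, capacity - len(flagged))
    sampled = rng.sample(rest, min(remaining, len(rest)))       # spot-check the rest
    return (flagged + sampled)[:capacity]

decisions = [{"id": i, "confidence": 0.5 if i < 5 else 0.9} for i in range(100)]
queue = select_for_review(decisions, capacity=10)
print(len(queue), "cases queued for human review")
```

The point is not the specific numbers but the design: oversight capacity is an explicit constraint, so reviewers see a volume they can actually assess instead of rubber-stamping a firehose.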
Enforcement priorities: where supervisors focus
Prohibited AI practices: low-hanging fruit
Supervisors initially focus on clear violations of Art. 5 prohibitions. Emotion recognition in workplaces and educational institutions, for example, represents low-hanging fruit because these practices are explicitly prohibited. Social scoring systems by government agencies and manipulative AI in consumer-facing applications also rank high on the priority list.
These cases are legally clear and enable precedent-setting without complex technical assessments.
Transparency: the enforcement compass
Lack of transparency often serves as an indicator for other compliance issues. Supervisors use documentation gaps as entry points for broader investigations.
Signals that trigger supervisors
Authorities watch for specific red flags: missing or outdated model documentation, inconsistencies between public summaries and actual use, lack of clear human oversight procedures, and vague or generic risk assessments without sector-specific considerations.
Sector-specific enforcement risks
Financial services: heightened attention
The financial services sector is already subject to intensive supervision and has extensive experience with regulatory compliance. However, AI-specific requirements such as bias monitoring and explainability demand new capabilities.
Risk indicator: Use of AI for credit assessment without adequate fairness metrics and documentation of training data representativeness.
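A basic fairness metric such as the demographic parity gap, the spread in approval rates across groups, is straightforward to compute; what threshold counts as "adequate" remains a policy and legal judgment, not something this sketch decides:

```python
from collections import defaultdict

def demographic_parity_gap(approvals, groups):
    """Difference in approval rate between the best- and worst-treated group.
    approvals: list of bool decisions; groups: parallel list of group labels."""
    totals, positives = defaultdict(int), defaultdict(int)
    for approved, group in zip(approvals, groups):
        totals[group] += 1
        positives[group] += approved  # True counts as 1
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: group "a" approved 2 of 3, group "b" approved 1 of 1
gap = demographic_parity_gap([True, True, False, True], ["a", "a", "a", "b"])
print(f"demographic parity gap: {gap:.3f}")
```

Tracking a metric like this per model release, alongside documentation of training data representativeness, is the kind of evidence a supervisor is likely to ask for first.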
Healthcare: safety first
Healthcare AI often falls under high-risk classification, with strict requirements around clinical validation and post-market surveillance.
Enforcement focus: Medical AI devices without adequate clinical evidence or post-deployment monitoring of performance degradation.
Public sector: FRIA compliance
Government organizations have mandatory FRIA requirements and face additional societal pressure for transparent AI deployment.
Compliance challenge: Balancing operational efficiency with extensive fundamental rights documentation and stakeholder consultation.
Practical preparedness strategy
Phase 1: Immediate compliance audit (now - Q4 2025)
30-day enforcement readiness check
Week 1: Inventory all AI systems and classify according to AI Act categories. Focus on prohibited practices and high-risk systems requiring immediate compliance.
Week 2: Audit existing documentation against AI Act transparency requirements. Identify missing model documentation, risk assessments, and human oversight procedures.
Week 3: Evaluate your governance procedures against enforcement scenarios. Can you provide all relevant documentation to a supervisor within 48 hours?
Week 4: Develop a compliance improvement plan prioritized by enforcement risk and business impact.
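The week-1 inventory step can be prototyped as a simple rules lookup. The category sets below are illustrative examples only, not a legal mapping of Art. 5 or Annex III, and anything unmatched is deliberately routed to human legal review rather than assumed safe:

```python
# Illustrative category sets -- a real classification requires legal analysis
PROHIBITED_USES = {
    "social_scoring",
    "workplace_emotion_recognition",
    "manipulative_targeting",
}
HIGH_RISK_USES = {
    "credit_scoring",
    "cv_screening",
    "medical_diagnosis",
    "exam_proctoring",
}

def classify(use_case: str) -> str:
    """First-pass triage of an AI system by use case; defaults to manual review."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if use_case in HIGH_RISK_USES:
        return "high-risk"
    return "review-needed"  # never silently classify as out of scope

inventory = ["cv_screening", "chatbot_faq", "workplace_emotion_recognition"]
for use_case in inventory:
    print(f"{use_case}: {classify(use_case)}")
```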
Phase 2: Enforcement-proof documentation (Q1 2026)
Three pillars matter in this phase. A template-driven approach forms the foundation: organizations develop standardized templates for model documentation, risk assessments, and incident response that align directly with AI Act requirements. Automated compliance monitoring adds a second layer: dashboards that show real-time compliance status and automatically generate alerts when non-compliance indicators are detected. Finally, legal response preparedness means training compliance teams to handle regulatory inquiries and developing standard response procedures for supervisor contact.
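A minimal version of such a monitoring check might look like the sketch below. The 90-day staleness threshold and the field names are internal-policy assumptions for illustration, not legal requirements:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # assumption: internal policy, not a legal deadline

def compliance_alerts(systems, today):
    """Scan an AI-system register and flag likely non-compliance indicators."""
    alerts = []
    for s in systems:
        if not s.get("documentation_complete"):
            alerts.append(f"{s['name']}: documentation incomplete")
        last = s.get("doc_last_updated")
        if last is None or today - last > STALE_AFTER:
            alerts.append(f"{s['name']}: documentation stale")
        if s.get("risk_category") == "high-risk" and not s.get("fria_done"):
            alerts.append(f"{s['name']}: FRIA missing")
    return alerts

register = [
    {"name": "credit-model", "documentation_complete": True,
     "doc_last_updated": date(2026, 1, 10), "risk_category": "high-risk",
     "fria_done": True},
    {"name": "cv-screener", "documentation_complete": False,
     "doc_last_updated": None, "risk_category": "high-risk", "fria_done": False},
]
for alert in compliance_alerts(register, today=date(2026, 2, 1)):
    print("ALERT:", alert)
```

In practice a check like this would run on a schedule and feed the dashboard; the point is that the red flags supervisors look for (missing documentation, stale records, absent FRIAs) are cheap to detect internally before a regulator does.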
Phase 3: Proactive governance (Q2-Q3 2026)
The third phase focuses on transforming compliance burden into competitive advantage. Organizations can use transparency as a differentiator by deploying superior documentation and explainability as selling points. Developing industry-leading practices that others use as benchmarks positions the organization as a thought leader in responsible AI. Meanwhile, demonstrable compliance leadership builds trust with customers, partners, and investors who increasingly value responsible AI implementation.
Enforcement-resistance strategies
The defensive layer: basic compliance
Minimum viable compliance begins with complete and current documentation for all AI systems within scope. This documentation must be accompanied by implemented human oversight procedures whose effectiveness is demonstrable. Risk assessment documents must be robust enough to withstand regulatory scrutiny, while incident response capabilities contain clear escalation procedures for different scenarios.
The strategic layer: compliance excellence
Advanced preparedness goes further with automated compliance monitoring and reporting systems that proactively identify risks. Predictive risk assessment capabilities help organizations prevent problems rather than just react to them. Stakeholder engagement programs for transparency create a culture of openness, while continuous improvement processes based on regulatory feedback ensure ongoing optimization of compliance processes.
Enforcement-resistant organization characteristics: Organizations well-prepared for enforcement share certain traits: they have proactive documentation habits with real-time updates, they maintain constructive relationships with relevant supervisors, they use compliance data for business intelligence and strategic decision making, and they invest in employee training on AI governance and regulatory requirements.
Supervisor relationships: cooperation over confrontation
Proactive engagement strategies
Sandbox participation: Use regulatory sandboxes to validate compliance approaches and build relationships with supervisors.
Industry consultation: Actively participate in industry consultations and regulatory guidance development to influence enforcement interpretation.
Voluntary disclosure: Consider proactive disclosure of compliance challenges and improvement plans to build goodwill.
Incident response best practices
When enforcement contact occurs, an immediate response team is crucial with designated legal and technical experts who can respond within 24 hours. Clear procedures for evidence preservation and privilege protection must be worked out in advance, as well as a communication strategy that ensures consistent messaging between legal, technical, and business stakeholders.
What organizations must do now
Immediate actions (this week)
Organizations should begin immediately with a thorough compliance audit, mapping their AI Act compliance status within 48 hours. That means not only checking that all relevant AI systems have adequate documentation, but also evaluating whether teams know how to respond to regulatory inquiries. It is equally important to assess whether the legal team has sufficient AI Act expertise, given the complexity and novelty of the regulation.
Strategic investments (Q4 2025 - Q1 2026)
For the period from Q4 2025 to Q1 2026, organizations must strategically invest in governance technology, specifically tools for automated compliance monitoring and reporting that reduce administrative burden. Additionally, compliance-by-design requires a redesign of AI development and deployment processes, so that compliance is not added afterward but built in from the beginning. Training programs for all teams working with AI become essential, as well as building external partnerships with legal experts, consultants, and industry peers for knowledge sharing and best practice exchange.
The enforcement reality of 2025
Enforcement preparedness in 2025 is not about perfect compliance from day one, but about demonstrable good faith efforts and continuous improvement capabilities. Supervisors recognize the complexity of AI Act implementation and appreciate organizations that are transparent about their challenges and show commitment to progressive compliance improvement.
Future perspective: enforcement evolution
2026 and beyond: mature enforcement
Expect enforcement to intensify significantly as national authorities expand their teams and build supervisory expertise. Precedent cases will gradually clarify the interpretation of complex AI Act provisions, while emerging industry standards make compliance expectations more concrete and actionable. At the same time, maturing technology will make enforcement tools more effective at detecting non-compliance and automating supervisory processes.
Emerging enforcement trends
Three important trends are emerging in enforcement evolution. First, supervisors will apply risk-based prioritization focusing on highest-impact violations and repeat offenders, rather than random checks. Second, increased cross-border coordination between national authorities emerges for multinational enforcement actions, preventing large tech companies from escaping through jurisdiction shopping. Third, we see the development of industry-specific guidance where sectors like healthcare, finance, and automotive receive specific enforcement interpretations and expectations that align with their unique risk profiles.
Conclusion: preparedness as competitive advantage
The enforcement reality of 2025 shows that preparedness is more than regulatory compliance - it's a strategic capability that distinguishes organizations in an AI-driven economy.
Organizations that now invest in robust enforcement preparedness create not only regulatory resilience, but also position themselves as trusted AI providers in a market where trust becomes increasingly critical.
The question is not whether enforcement will intensify - that's inevitable. The question is whether your organization will be ready to turn that development from threat into opportunity.
Start today: Begin with an honest assessment of your compliance status, invest in fundamental documentation and governance processes, and build the relationships and capabilities that help you navigate the enforcement wave that's coming.
Organizations that act now will be market leaders in two years. Those who wait will be playing catch-up in an increasingly complex regulatory landscape.
This article is an initiative by geletterdheid.ai. We support organizations in developing enforcement-resistant AI governance capabilities that combine compliance and business value. For questions about enforcement preparedness in your organization, please contact us.