
The EU AI Act: an in-depth analysis


Current status March 2026: The EU AI Act entered into force on August 1, 2024. Prohibited AI practices and AI literacy requirements have been enforceable since February 2, 2025. GPAI obligations apply since August 2, 2025. The final wave of requirements for high-risk AI systems takes full effect in August 2026, making this the decisive year for organizations to complete their compliance preparations.

In a world where artificial intelligence (AI) is becoming increasingly intertwined with our daily lives, from streaming service recommendations to hospital diagnoses, the call for regulation is growing. The European Union has responded by introducing the AI Act, the world's first comprehensive legislation for AI. In this blog, we dive deep into this groundbreaking law, examine its impact on organizations, and discuss how companies can prepare for this new regulation.

The birth of the EU AI Act

The EU has always been at the forefront of digital regulation, with laws such as the General Data Protection Regulation (GDPR) becoming global standards. With the AI Act, the EU is once again taking a significant step in shaping the digital future.

The law is the result of years of preparation, consultations with experts and stakeholders, and intensive negotiations between EU member states. The goal? To find a balance between fostering innovation and protecting the fundamental rights and safety of EU citizens.

The core of the AI Act: a risk-based approach

The heart of the AI Act is its risk-based approach. Instead of a one-size-fits-all regime, the law recognizes that different AI applications carry different levels of risk and regulates them proportionally: less risky systems can continue to innovate, while higher-risk applications face stricter requirements to protect fundamental rights and safety. AI systems are therefore classified into four risk categories:

1. Unacceptable risk

AI systems with unacceptable risks are prohibited. These systems are considered unacceptable because they pose a serious threat to fundamental rights, safety, and human dignity. They can lead to abuse, discrimination, or unfair influence of individuals without their consent. Examples include:

  • Social scoring by governments: systems that rate citizens based on behavior or personal characteristics, as seen in some parts of the world.
  • Manipulative AI: Systems that manipulate human behavior and bypass free will.
  • Vulnerability exploitation: AI that exploits vulnerabilities of specific groups.

2. High risk

AI systems with high risk may be used but must comply with strict requirements. These requirements include risk management systems, robust data management practices, technical documentation, transparency to users, human oversight, and safeguards for accuracy and cybersecurity. Examples include:

  • Critical infrastructure: AI in transport and energy.
  • Education and employment: AI used in grading exams or screening job applicants.
  • Essential services: AI used to assess creditworthiness or eligibility for benefits.

3. Limited risk

This category includes systems such as chatbots or deepfake technology. These systems are mainly subject to transparency obligations. Users must know they are interacting with AI.

4. Minimal risk

The vast majority of AI systems, such as AI-powered video games or spam filters, fall into this category. These can be freely developed and used, subject to existing legislation.

The four risk categories at a glance

The AI Act classifies AI systems into four tiers: unacceptable risk (prohibited outright), high risk (permitted with strict obligations), limited risk (transparency requirements only), and minimal risk (no additional obligations). This proportional approach means most AI systems face no new regulatory burden, while the strictest requirements are reserved for systems that can most seriously affect people's rights and safety.
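The four-tier structure above can be sketched as a simple lookup. The following is a minimal illustrative sketch in Python; the `RiskTier` enum and the use-case labels are my own naming for this example, not terms from the Act, and a real classification always requires legal analysis against Article 5 and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted with strict obligations
    LIMITED = "limited"            # transparency requirements only
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping from the example use cases discussed above.
# Real classification requires legal analysis, not a lookup table.
EXAMPLE_CLASSIFICATION = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "job applicant screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations_for(tier: RiskTier) -> str:
    """Summarize the regulatory consequence of each tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "strict requirements (risk management, documentation, oversight)",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.MINIMAL: "no additional obligations",
    }[tier]
```

The point of the sketch is the proportionality: only one tier is banned, only one carries heavy obligations, and most systems fall through to "no additional obligations".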

Strict requirements for high-risk AI

For AI systems classified as high risk, the AI Act sets out a series of strict requirements. These requirements are aimed at effectively managing the risks such systems may pose and ensuring they are deployed safely and responsibly:

  • Risk Management System: Organizations must identify, evaluate, and mitigate risks throughout the entire lifecycle of the AI system.
  • Data Governance: There must be robust data management practices, including ensuring the quality and relevance of training data.
  • Technical Documentation: Detailed documentation about the system, including its purpose, architecture, and performance, must be maintained.
  • Transparency: Users must be clearly informed about the capabilities and limitations of the AI system.
  • Human Oversight: There must be meaningful human oversight to monitor the output of the AI system and intervene if necessary.
  • Accuracy and Robustness: The system must be sufficiently accurate, robust, and cybersecure.

Impact on organizations

The introduction of the AI Act will have far-reaching consequences for organizations that develop or use AI. It imposes new obligations depending on the risk level of the AI application and forces companies to review their existing processes and strategies. Here are some crucial points companies need to consider:

AI Inventory and Risk Classification: Organizations will need to map and classify all their AI systems according to the risk levels of the AI Act. This is an ongoing process as new AI systems are developed or existing ones are modified.
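An inventory like the one described can start as a simple structured record per system. This is an illustrative sketch with hypothetical field names, not a prescribed format; the Act does not mandate any particular inventory schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organization's AI system inventory (illustrative)."""
    name: str
    purpose: str
    risk_tier: str            # e.g. "high", "limited", "minimal"
    owner: str                # accountable team or person
    mitigations: list[str] = field(default_factory=list)

def systems_needing_attention(inventory: list[AISystemRecord]) -> list[str]:
    """Return names of high-risk systems with no documented mitigations yet."""
    return [s.name for s in inventory
            if s.risk_tier == "high" and not s.mitigations]
```

Even a minimal structure like this makes the "ongoing process" concrete: new or modified systems become new or updated records, and gaps (high-risk systems without mitigations) become queryable.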

Compliance for High-Risk Systems: For high-risk AI systems, companies will need to invest significant resources to comply with legal requirements. This includes setting up risk management systems, improving data governance, preparing technical documentation, and implementing robust processes for human oversight.

Transparency Implementation: For limited-risk systems, companies must find ways to clearly inform users that they are interacting with AI. This may lead to adjustments in user interfaces and communication strategies.

Reconsideration of AI Strategies: Some AI applications previously considered promising may now be prohibited or economically unfeasible due to strict requirements. Organizations will need to reconsider their AI strategies in light of the new regulation.

Governance and Responsibility: Companies will need to create new roles and responsibilities to ensure compliance with the AI Act. This may include appointing an AI ethics officer or establishing an AI governance committee.

Documentation and Audit Trails: The emphasis on transparency and accountability in the AI Act means organizations must implement robust systems for documenting decisions and maintaining audit trails related to their AI systems.
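An audit trail of AI-related decisions can be as simple as an append-only, timestamped event log. The sketch below (my own illustrative design, not a format required by the Act) writes one JSON object per line to any writable text stream.

```python
import datetime
import json

def append_audit_event(log, actor: str, action: str, system: str) -> dict:
    """Append one timestamped, structured event to an append-only audit log.

    `log` is any writable text stream (an open file, a StringIO, ...).
    """
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "system": system,
    }
    log.write(json.dumps(event) + "\n")  # one JSON object per line (JSONL)
    return event
```

Append-only, machine-readable logs are easy to retain, search, and hand to an auditor; the key design choice is that events are written at decision time, not reconstructed afterwards.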


Timelines and implementation of the EU AI Act

The AI Act will come into force gradually, with different deadlines for different aspects of the law. Here are the key dates and periods:

Legislative process

  • The legislative process began on April 21, 2021, with the legislative proposal from the European Commission.
  • The European Parliament and European Council then negotiated to reach a political agreement.

Entry into force and implementation period

  • Entry into Force: The EU AI Act entered into force on August 1, 2024.
  • Member State Governance: Immediately after entry into force, member states must begin setting up their governance structures, including designating national supervisory authorities.
  • Unacceptable Risks: 6 months after entry into force (February 2025), provisions regarding unacceptable risks take effect.
  • General-Purpose AI and Commission Guidelines: 12 months after entry into force (August 2025), rules for general-purpose AI (GPAI) models take effect.
  • High-Risk AI and Safety Components: 24 months after entry into force (August 2026), provisions for high-risk AI systems fully take effect.

Special cases

  • Large-Scale IT Systems: AI components of large-scale IT systems in the areas of freedom, security, and justice that are already on the market benefit from an extended transition period, running until the end of 2030.
  • AI Systems on the Market Before Entry into Force: GPAI models already on the market have a transition period of 3 years (until August 2027); high-risk AI systems already on the market are only covered by the new rules if they subsequently undergo significant changes in design.

Key implementation timeline

The EU AI Act follows a phased rollout: prohibited practices since February 2025, GPAI obligations since August 2025, and full high-risk AI requirements from August 2026. Organizations should use the remaining months before August 2026 to complete their compliance programs, particularly for high-risk AI systems that require conformity assessments and registration in the EU database.
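The phased rollout lends itself to a small lookup: given a date, which milestones have already passed? A minimal sketch using the dates cited in this article (the `MILESTONES` table and function name are my own):

```python
from datetime import date

# Key milestones of the phased rollout, as cited in this article.
MILESTONES = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibitions and AI literacy requirements apply",
    date(2025, 8, 2): "GPAI obligations apply",
    date(2026, 8, 2): "High-risk AI requirements fully apply",
}

def obligations_in_effect(today: date) -> list[str]:
    """List every milestone whose deadline has already passed, oldest first."""
    return [label for deadline, label in sorted(MILESTONES.items())
            if deadline <= today]
```

For example, as of March 2026 three of the four milestones have passed, with the high-risk requirements still ahead.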

Enforcement and sanctions in the EU AI Act

The EU takes enforcement of the AI Act very seriously, with significant fines for non-compliance. Here is a detailed overview of the sanctions:

Prohibited AI practices (Article 5)

  • Fine: Up to 35 million euros or up to 7% of total global annual turnover.

Other infringements

  • Fine: Up to 15 million euros or up to 3% of total global annual turnover.

Misleading information

  • Fine: Up to 7.5 million euros or up to 1% of total global annual turnover.

SMEs and start-ups

  • Adjusted Arrangement: Fines are capped at the above amounts or percentages, whichever is lower.

EU institutions

  • Sanctions: Up to 1.5 million euros for prohibited AI practices; up to 750,000 euros for non-conformity.

Providers of general-purpose AI models

  • Fine: Up to 15 million euros or up to 3% of total global annual turnover.
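The dual-ceiling logic above can be made concrete in a few lines. A minimal sketch (function and parameter names are my own): for most companies the ceiling is whichever of the fixed cap or the turnover percentage is higher, while for SMEs and start-ups it is whichever is lower.

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float,
             is_sme: bool = False) -> float:
    """Compute the maximum applicable fine under the AI Act's dual ceilings.

    For most companies the ceiling is the HIGHER of the fixed cap and the
    turnover percentage; for SMEs and start-ups it is the LOWER of the two.
    """
    pct_based = turnover_eur * pct
    return min(cap_eur, pct_based) if is_sme else max(cap_eur, pct_based)

# Prohibited-practice ceiling (35 million euros or 7% of turnover):
# a company with 1 billion euros turnover faces a 70 million euro ceiling,
# while an SME with the same turnover is capped at 35 million euros.
```

The SME adjustment is what makes the regime proportionate: the same infringement category yields a lower ceiling for smaller players.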

Factors in determining sanctions

The severity and duration of the infringement, the number of affected persons and the extent of damage, the degree of cooperation with authorities, previous infringements, and whether the infringement was intentional are all factors considered in determining sanctions. These factors help authorities better understand the severity of the situation and impose a proportional penalty. Serious, long-term infringements with significant damage will be punished more severely, especially if the organization does not cooperate with supervisors or if there are repeated or intentional violations. Conversely, good cooperation or taking measures to limit damage can have a mitigating effect on the final sanction.

Recommendations for organizations

Start Preparing Now: Don't wait until the last minute. For example, start by creating an inventory of all AI systems currently in use, including information about their functions, risk level, and any existing risk management measures. This helps to get a clear overview and set priorities for further actions.

Invest in Expertise: Consider hiring or training specialists who understand the technical and legal aspects of AI compliance.

Implement Robust Governance: Establish an AI ethics committee or appoint a responsible officer to oversee AI-related matters.

Improve Documentation Processes: Start setting up comprehensive documentation processes for your AI systems now.

Review Your Supplier Relationships: If you use third-party AI systems, ensure your suppliers are also preparing for compliance with the AI Act.

Stay Informed: Keep up to date with the latest developments around the law and accompanying guidelines.

View Compliance as an Opportunity: While the AI Act brings challenges, it also offers opportunities. Organizations that lead in responsible AI development can gain a competitive advantage.

Conclusion

The EU AI Act represents a crucial turning point in the regulation of artificial intelligence. While the law presents challenges for organizations, it also provides a framework for responsible AI development that can increase trust in this technology.

By acting proactively and embracing the principles of the AI Act, organizations can become compliant and leaders in the ethical and responsible use of AI. In a world where AI plays an increasingly important role, this ability to build trust and manage risks will be invaluable.
