
Prohibited AI systems under the EU AI Act

7 min read

Current status March 2026: The prohibitions under Article 5 of the EU AI Act have been enforceable since February 2, 2025. The European Commission published draft guidelines on prohibited AI practices on February 4, 2025, with dozens of practical examples. Organizations that still operate prohibited AI systems face fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. The sanction regime has been in effect since August 2, 2025.

The European Union (EU) is known for its proactive approach to regulating emerging technologies, and artificial intelligence (AI) is no exception. The AI Act, which entered into force on August 1, 2024, introduces a uniform legal framework for AI across the EU. The goal? To promote trustworthy, human-centric AI while minimizing risks to fundamental rights, safety, and public interests. A key aspect of this regulation is the explicit prohibition of certain AI practices considered to pose unacceptable risks. These prohibited systems are clearly defined to ensure that AI development in Europe aligns with fundamental rights and ethical principles.

Which AI systems are prohibited?

The AI Act prohibits several AI systems due to their potentially dangerous impact on individuals and society. Below, we list these prohibited systems:

1. Social scoring systems

AI systems that evaluate or classify natural persons based on their social behavior or personal characteristics are prohibited where this leads to detrimental or unfavourable treatment that is unjustified, disproportionate, or occurs in contexts unrelated to the data originally collected. This covers systems similar to China's social credit system, which can have far-reaching consequences for individuals' participation in society.

2. Manipulation and exploitation

Systems designed to manipulate human behavior, opinions, or decisions through subliminal techniques, or by exploiting vulnerabilities related to age, disability, or a specific social or economic situation (for example, children or people with disabilities), are banned. This protects people from unconscious influence and psychological manipulation.

3. Real-time biometric identification

The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited, with some strictly defined exceptions for specific security threats.

Narrow exceptions for biometric identification

While real-time biometric identification in public spaces is generally prohibited, the AI Act allows strictly limited exceptions: targeted searches for missing children, prevention of specific imminent terrorist threats, and locating suspects of serious criminal offences. These exceptions require prior judicial or independent administrative authorization and are subject to rigorous safeguards. The bar is deliberately set very high.

4. Emotion recognition in specific contexts

AI systems that infer the emotions of people in workplaces or educational settings are prohibited, except where the system is intended for medical or safety reasons. The scientific validity of inferring emotions from biometric data is contested, and the technology has great potential for misuse in situations marked by power imbalances, such as between employer and employee.

5. Predictive policing based on profiling

The use of AI systems that assess the risk of a natural person committing a criminal offence based solely on profiling or on assessing personality traits and characteristics is prohibited. This type of profiling can lead to discrimination and seriously affects individuals' fundamental rights.

6. Untargeted scraping of facial images

AI systems that create facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage are not allowed. Collecting facial images without consent poses a significant threat to privacy and can easily lead to misuse.


Exceptions to the prohibitions

While the AI Act is strict in its prohibitions, it does provide for carefully defined exceptions, most notably for real-time biometric identification when its use is strictly necessary for public safety, such as searching for missing children or preventing terrorist attacks. Such exceptions are only possible under strict safeguards and with prior approval from a judicial or independent administrative authority.

How to check if your AI system is prohibited

Organizations should conduct a thorough review of all AI systems against the six prohibition categories. Pay particular attention to systems that score or classify individuals, influence behavior (including through interface design), use biometric data, detect emotions in workplace or educational contexts, predict individual behavior for law enforcement, or collect facial images at scale. When in doubt, the European Commission's guidelines on prohibited AI practices provide detailed examples of what falls within and outside the prohibitions.
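As an illustration, the screening step described above can be sketched as a simple checklist. The category keys and descriptions below are our own shorthand, not terms from the Act itself, and a real assessment always requires legal review against Article 5 and the Commission's guidelines:

```python
# Illustrative sketch only: a coarse first-pass screen against the six
# prohibition categories discussed in this article. Flag names are
# hypothetical; a flagged system needs legal review, not automatic rejection.

PROHIBITION_CATEGORIES = {
    "social_scoring": "Scores or classifies individuals based on social behavior",
    "manipulation": "Uses subliminal or exploitative techniques to distort behavior",
    "realtime_biometric_id": "Real-time remote biometric identification in public spaces",
    "emotion_recognition": "Detects emotions in workplace or educational settings",
    "predictive_policing": "Predicts criminal behavior based solely on profiling",
    "face_scraping": "Builds face databases via untargeted scraping of images",
}

def screen_system(features: set) -> list:
    """Return descriptions of the prohibition categories a system may touch."""
    return [desc for key, desc in PROHIBITION_CATEGORIES.items() if key in features]

# Example: an HR tool that also infers employee emotions.
for hit in screen_system({"emotion_recognition", "recommendation"}):
    print("Review needed:", hit)
```

A system with no hits is not automatically compliant; it may still fall into the Act's high-risk category and carry its own obligations.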

Consequences of non-compliance

Non-compliance with the prohibitions can result in severe sanctions: fines of up to 35 million euros or 7% of a company's total worldwide annual turnover, whichever is higher. These high fines reflect the importance the EU places on trustworthy and ethical AI.
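The arithmetic behind "whichever is higher" is simple: the 35-million-euro floor dominates for smaller companies, while the 7% figure takes over once worldwide turnover exceeds 500 million euros. A minimal sketch (the turnover figures are illustrative):

```python
# Maximum administrative fine for a prohibited practice: EUR 35 million
# or 7% of total worldwide annual turnover, whichever is higher.
# Integer arithmetic keeps the example exact.

def max_fine(worldwide_turnover_eur: int) -> int:
    return max(35_000_000, worldwide_turnover_eur * 7 // 100)

# EUR 200 million turnover: 7% is EUR 14 million, so the floor applies.
print(max_fine(200_000_000))    # → 35000000
# EUR 1 billion turnover: 7% is EUR 70 million, which is higher.
print(max_fine(1_000_000_000))  # → 70000000
```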

Conclusion

The prohibitions in the EU AI Act are essential to ensure trustworthy and ethical AI within the European Union. By banning AI practices that pose unacceptable risks, the law aims to protect the fundamental rights and freedoms of individuals and to limit the risks of uncontrolled AI use. However, it is crucial that we remain alert to new developments and potential risks in the technology, ensuring the legal framework remains future-proof and effective. The AI Act is an important step in regulating AI, but continuous evaluation and adaptation will be necessary to keep pace with the rapid advancement of technology.
