As the full implementation of the EU AI Act approaches, organizations face the challenge of understanding and correctly applying the different assessment requirements. This guide provides a clear, legally grounded analysis of all mandatory risk assessments.
The EU AI Act requires different types of risk assessments that are often confused with each other. Correct identification of which assessments apply to your AI system is crucial for compliance. Some systems require multiple parallel assessments by different actors in the value chain.
The European Artificial Intelligence Act (Regulation (EU) 2024/1689) introduces a comprehensive framework of risk assessments that organizations must perform to ensure compliance. These assessments range from fundamental rights impact assessments to technical conformity assessments, each with its own requirements and scope. For compliance and AI professionals, it is essential to have a complete overview of all mandatory assessments and how they interconnect.
Fundamental Rights Impact Assessment (FRIA) - the cornerstone of ethical AI implementation
The Fundamental Rights Impact Assessment, enshrined in Article 27 of the AI Act, represents one of the most substantial new obligations for certain deployers of high-risk AI systems. This assessment goes beyond traditional technical compliance and focuses specifically on the impact on fundamental rights of individuals and groups.
Scope and obligated parties
Article 27(1) provides that the FRIA obligation rests on specific categories of deployers. First, all deployers that are bodies governed by public law are required to conduct a FRIA before deploying a high-risk AI system. Second, the obligation also applies to private entities providing public services, a concept the Act leaves relatively open and that will need to be clarified further in guidance and practice.
A special category concerns deployers of AI systems for credit assessment and insurance. Article 27 refers to Annex III, points 5(b) and 5(c), whereby systems for creditworthiness, credit scoring and risk assessment for life insurance and health insurance fall under the FRIA obligation. This extension to the financial sector underscores the broad impact foreseen by the legislator.
Content requirements and methodology
According to Article 27(1), the FRIA must include a systematic analysis of the specific risks that the AI system may pose to the rights of individuals or groups. This analysis must contain at least the following elements: a detailed description of the intended use of the AI system, including the processes and contexts in which it will be deployed, the duration and frequency of use, and identification of the categories of individuals or groups likely to be affected.
The risk assessment itself must evaluate the potential risks of harm to those individuals or groups, with fundamental rights such as privacy, freedom of expression, and non-discrimination as typical areas of concern. In addition, the FRIA must describe the human oversight measures that have been implemented and the measures to be taken if the identified risks materialize.
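To make these elements concrete, the following minimal sketch shows one way a deployer could record them internally; the class and field names are illustrative assumptions, not terminology from the Act.

```python
# Hypothetical sketch of how a deployer might record the Article 27(1) elements
# internally; class and field names are illustrative, not terms from the Act.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FRIARecord:
    system_name: str
    intended_use: str               # processes and contexts of deployment
    duration_and_frequency: str     # how long and how often the system is used
    affected_groups: list[str]      # categories of persons likely to be affected
    identified_risks: list[str]     # risks of harm to fundamental rights
    oversight_measures: list[str]   # human oversight measures in place
    mitigation_measures: list[str]  # measures if the risks materialize
    completed_on: date = field(default_factory=date.today)
    needs_update: bool = False      # set when relevant factors change (Art. 27(2))

fria = FRIARecord(
    system_name="CV screening assistant",
    intended_use="Ranking of incoming job applications by the HR department",
    duration_and_frequency="Continuous use during recruitment campaigns",
    affected_groups=["job applicants"],
    identified_risks=["indirect discrimination on protected characteristics"],
    oversight_measures=["a recruiter reviews every ranking before rejection"],
    mitigation_measures=["quarterly bias audit", "complaint procedure"],
)
```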
Timing and update requirements
Like a Data Protection Impact Assessment (DPIA) under the GDPR, a FRIA must be conducted before the high-risk AI system is used for the first time. Article 27(2) determines that the assessment must be updated when the deployer considers that relevant factors have changed or are no longer up-to-date. This continuous responsibility underscores the dynamic nature of AI risks.
Exceptions and temporary exemptions
Article 27(3) provides that in the case referred to in Article 46(1) (derogation from the conformity assessment procedure) deployers may be exempt from the obligation to notify the results of the FRIA to the market surveillance authority. The FRIA itself remains mandatory and must be carried out "without undue delay"; the exception concerns notification only, not the assessment.
Conformity assessment - technical compliance for high-risk systems
The conformity assessment forms the heart of technical compliance under the AI Act and is regulated in Articles 43 to 48. This assessment focuses on the technical aspects of AI systems and their conformity with the established requirements, in contrast to the FRIA which concentrates on fundamental rights.
Procedures and modularity
Article 43 defines the conformity assessment as the process of demonstrating that a high-risk AI system meets the requirements set out in Section 2 of Chapter III of the AI Act. For most high-risk AI systems, the internal control procedure of Annex VI applies, whereby the provider assesses conformity itself. For the biometric systems listed in Annex III, point 1, a third-party conformity assessment according to Annex VII is required where harmonized standards are not, or only partly, applied.
The allocation of conformity responsibilities along the value chain is an important aspect that is often overlooked. When a high-risk AI system is built from components or tools that have each been assessed or certified separately, the provider who places the complete system on the market remains responsible for its overall conformity, and Article 43(4) requires a new conformity assessment whenever the system is subsequently modified substantially.
For organizations integrating AI systems from different components, building on existing assessments and documentation can nonetheless offer significant compliance advantages, provided the integration itself is carefully documented and assessed.
CE marking and declaration of conformity
Articles 47 and 48 require that a successful conformity assessment results in an EU declaration of conformity and CE marking. The declaration of conformity must contain the information set out in Annex V, including identification of the system, references to the harmonized standards applied, and, where applicable, identification of the notified body.
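As an illustration, the record below captures the kind of information Annex V calls for; the structure and field names are assumptions made for the example, not an official schema.

```python
# Illustrative only: a record holding the kind of information Annex V asks for.
# The class and field names are assumptions, not an official schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EUDeclarationOfConformity:
    system_identification: str              # name, type and version of the AI system
    provider: str                           # name and address of the provider
    harmonised_standards: list[str] = field(default_factory=list)  # references applied
    notified_body: Optional[str] = None     # identification, where third-party assessed
    place_and_date_of_issue: str = ""
    signatory: str = ""

doc = EUDeclarationOfConformity(
    system_identification="ExampleVision biometric gate, v3.1",
    provider="Example Corp, Examplestraat 1, Amsterdam",
    harmonised_standards=["EN ISO/IEC 42001 (illustrative reference)"],
    notified_body="NB 1234 (fictitious)",
)
```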
GPAI models with systemic risk
For General Purpose AI (GPAI) models classified as posing systemic risk, special obligations apply under Article 55. The classification as a model with systemic risk follows from Article 51 and Annex XIII; among other criteria, a threshold of 10^25 floating-point operations (FLOPs) of cumulative training compute applies. Providers of such models must perform model evaluations according to standardized protocols, including adversarial testing to identify and mitigate systemic risks.
Risk management system - continuous vigilance throughout the entire lifecycle
Article 9 of the AI Act requires providers of high-risk AI systems to implement a comprehensive risk management system that remains operational throughout the entire lifecycle of the AI system. This system goes beyond a one-time assessment and requires continuous monitoring and adjustment.
Systematic risk identification
The risk management system must according to Article 9(2) identify known and reasonably foreseeable risks that the high-risk AI system may pose to health, safety or fundamental rights. This identification must take place both when used in accordance with intended purpose and in case of reasonably foreseeable misuse. The system must additionally evaluate other risks that may arise based on data collected by the post-market monitoring system.
Risk management measures
The identified risks must be addressed through appropriate and targeted risk management measures. Article 9(5) emphasizes that these measures should aim to eliminate or reduce the risks as far as technically feasible, while maintaining an appropriate balance between risk minimization and the expected functionality of the system.
Special attention to vulnerable groups
An important aspect of the risk management system is explicit attention to vulnerable groups. The system must evaluate whether the AI system can have negative impact on persons under 18 years or other vulnerable groups, whereby specific protection measures must be implemented where necessary.
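A hypothetical risk-register entry such as the one sketched below can help operationalize these steps; the fields, the 1-5 scales and the prioritization formula are internal conventions of this example, not requirements of Article 9.

```python
# A hypothetical risk-register entry for the Article 9 steps described above.
# The fields, the 1-5 scales and the prioritisation formula are internal
# conventions of this example, not requirements of the Act.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str
    scenario: str             # "intended use" or "reasonably foreseeable misuse"
    affects_vulnerable: bool  # e.g. persons under 18; triggers extra protection
    severity: int             # 1 (low) .. 5 (high), internal scale
    likelihood: int           # 1 (low) .. 5 (high), internal scale
    mitigation: str           # elimination, reduction or residual-risk controls

    @property
    def priority(self) -> int:
        return self.severity * self.likelihood  # simple internal prioritisation

register = [
    RiskEntry("False negatives for minority dialects", "intended use",
              affects_vulnerable=False, severity=4, likelihood=3,
              mitigation="retraining on a more balanced speech corpus"),
    RiskEntry("Repurposing of scores for exam surveillance",
              "reasonably foreseeable misuse",
              affects_vulnerable=True, severity=5, likelihood=2,
              mitigation="contractual restrictions plus technical rate limits"),
]
urgent = [r for r in register if r.priority >= 10 or r.affects_vulnerable]
```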
Annex III classification assessment - determination of high-risk status
A crucial first step in the compliance process is the correct classification of AI systems as high-risk according to Article 6 and Annex III of the AI Act. This classification assessment determines which additional obligations apply and must be carried out with great care.
Categories of high-risk systems
Annex III defines eight main categories of high-risk AI systems, each with specific subcategories and nuances. These categories include biometric identification and categorization, critical infrastructure management, education and vocational training, employment and personnel management, access to essential services, law enforcement, migration and border control, and justice and democratic processes.
Category | Annex III reference | FRIA required |
---|---|---|
Biometric identification | Point 1 | Yes, for public bodies and private entities providing public services |
Critical infrastructure | Point 2 | No (excluded by Article 27(1)) |
Education and training | Point 3 | Yes, for public bodies and private entities providing public services |
Employment | Point 4 | Yes, for public bodies and private entities providing public services |
Credit and insurance | Points 5(b) and 5(c) | Yes, for all deployers |
Nuances and exceptions
It is important to note that not all systems within these categories are automatically considered high-risk. Article 6(3) allows providers to document that their AI system does not pose a significant risk of harm to health, safety or fundamental rights, in which case it falls outside the high-risk classification; this derogation is not available, however, where the system performs profiling of natural persons. The assessment requires a thorough technical and legal analysis of the specific implementation and context.
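The following sketch illustrates this two-step reasoning in code; the area labels and function name are made up for the example, and a real classification exercise requires legal analysis rather than a lookup.

```python
# A rough first-pass screening helper mirroring the reasoning above. The area
# labels and function name are made up; a real classification decision requires
# legal analysis, not a lookup.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice_democracy",
}

def presumed_high_risk(area: str,
                       documented_art6_3_derogation: bool = False,
                       performs_profiling: bool = False) -> bool:
    """Return True when the system should be treated as high-risk."""
    if area not in ANNEX_III_AREAS:
        return False
    if performs_profiling:
        return True  # profiling of natural persons: always high-risk (Art. 6(3))
    return not documented_art6_3_derogation

print(presumed_high_risk("employment"))                                     # True
print(presumed_high_risk("employment", documented_art6_3_derogation=True))  # False
print(presumed_high_risk("employment", True, performs_profiling=True))      # True
```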
GPAI model assessments - new requirements for foundation models
With the increasing prominence of General Purpose AI models, the AI Act introduces specific assessment requirements for GPAI models, with special attention to models with systemic risk as defined in Article 51.
Systemic risk threshold
A GPAI model is presumed to have high-impact capabilities, and thus systemic risk, when the cumulative amount of compute used for its training exceeds 10^25 FLOPs; the Commission can also designate models with equivalent capabilities or impact on the basis of the criteria in Annex XIII. The quantitative threshold provides clarity but can be adjusted by the Commission in light of technological developments.
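For a rough sense of scale, the back-of-the-envelope check below uses the common approximation that training compute is about six FLOPs per parameter per training token; both the rule of thumb and the example numbers are illustrative and do not come from the Act.

```python
# Back-of-the-envelope check against the 10^25 FLOP threshold, using the common
# approximation "training compute ≈ 6 × parameters × training tokens". Both the
# rule of thumb and the example numbers are illustrative, not from the Act.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_compute(n_parameters: float, n_tokens: float) -> float:
    return 6.0 * n_parameters * n_tokens

compute = estimated_training_compute(n_parameters=70e9, n_tokens=15e12)
print(f"estimated compute: {compute:.2e} FLOPs")   # ~6.30e+24
print(compute > SYSTEMIC_RISK_THRESHOLD_FLOPS)     # False: below the threshold
```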
Model evaluation requirements
Article 55(1)(a) requires providers of GPAI models with systemic risk to perform model evaluations in accordance with standardized protocols and tools reflecting the state of the art. These evaluations must include adversarial testing aimed at identifying and mitigating systemic risks.
Safety and security framework
Providers must establish, implement and update a comprehensive safety and security framework describing how they assess and mitigate systemic risks throughout the entire lifecycle of the model. This framework must be regularly evaluated and adjusted based on new insights and technological developments.
The Code of Practice for GPAI models (Article 56) offers providers a voluntary compliance mechanism serving as an important guide for implementing AI Act obligations.
Post-market monitoring - continuous surveillance in practice
The post-market monitoring system, regulated in Article 72 of the AI Act, represents a fundamental shift toward continuous surveillance of AI systems after they are placed on the market. This system goes beyond traditional product surveillance and recognizes the dynamic nature of AI systems.
Systematic data collection
Providers must implement a post-market monitoring system that systematically collects and analyzes relevant data about the performance of the high-risk AI system throughout its lifetime. This data includes information about the operation of the system in real-world conditions, including deviations from expected performance, unintended effects, and feedback from users and stakeholders.
The monitoring system must be designed to identify trends and patterns that may indicate deterioration of performance, bias, or other risks that were not fully anticipated during the initial risk assessment. This information must then be used to update the risk management system and implement corrective measures where necessary.
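A minimal sketch of such a check might look as follows; the baseline, margin and metric are assumptions chosen for the example.

```python
# Minimal sketch of a drift check: compare a recent window of a monitored metric
# with the value observed during the conformity assessment. The baseline, margin
# and metric are assumptions chosen for this example.
from statistics import mean

BASELINE_ACCURACY = 0.92   # measured during the conformity assessment
ALERT_MARGIN = 0.05        # internal tolerance, not prescribed by the Act

def performance_drifted(recent_accuracy_samples: list[float]) -> bool:
    """True when real-world performance falls materially below the baseline."""
    return mean(recent_accuracy_samples) < BASELINE_ACCURACY - ALERT_MARGIN

weekly_samples = [0.90, 0.88, 0.84, 0.83]
if performance_drifted(weekly_samples):
    print("Drift detected: update the risk assessment and consider corrective action")
```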
Incident reporting
A crucial component of post-market monitoring is the obligation to report serious incidents to relevant authorities. Article 73 requires providers to promptly report serious incidents to market surveillance authorities. These include events that directly or indirectly lead to death, serious injury, serious damage to health, or serious disruption of critical infrastructure.
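As an illustration only, an internal incident record might capture at least the following; the field names and category labels are assumptions, since the Act defines what a serious incident is but not a reporting data model.

```python
# Illustrative internal incident record; the Act defines what a serious incident
# is, but not a reporting data model, so these fields are assumptions.
from dataclasses import dataclass
from datetime import datetime

SERIOUS_INCIDENT_CATEGORIES = {
    "death",
    "serious_injury",
    "serious_damage_to_health",
    "serious_disruption_of_critical_infrastructure",
}

@dataclass
class SeriousIncidentReport:
    system_id: str
    category: str                  # one of SERIOUS_INCIDENT_CATEGORIES
    occurred_at: datetime
    description: str
    causal_link_established: bool  # triggers the duty to report promptly
    authority_notified: bool = False

report = SeriousIncidentReport(
    system_id="triage-assistant-v2",
    category="serious_damage_to_health",
    occurred_at=datetime(2025, 9, 14, 8, 30),
    description="Incorrect low-urgency classification delayed treatment",
    causal_link_established=True,
)
assert report.category in SERIOUS_INCIDENT_CATEGORIES
```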
Biometric identification by governments - enhanced requirements
Biometric identification systems, classified under Annex III, point 1, are subject to particularly strict requirements because of their potential to infringe privacy and other fundamental rights. These systems are subject not only to the standard high-risk obligations but also to additional safeguards.
Human oversight and additional safeguards
For biometric identification systems, the general human oversight requirements of Article 14 apply, supplemented by a specific safeguard: under Article 14(5), no action or decision may be taken on the basis of an identification produced by a remote biometric identification system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. This two-person verification does not apply where Union or national law considers it disproportionate for the purposes of law enforcement, migration, border control or asylum.
FRIA obligations
All public bodies implementing biometric identification systems are required to conduct a FRIA before the system is put into use. This assessment must pay special attention to the proportionality of using biometric identification in relation to intended objectives and available alternatives.
The use of real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement is in principle prohibited under Article 5, with very limited exceptions that are strictly regulated.
Implementation timeline 2025-2027 - phased entry into force
The AI Act has a complex implementation timeline whereby different obligations come into force at different times. This phased approach gives organizations time to develop compliance systems but also requires careful planning.
2025 milestones
On February 2, 2025, the prohibitions on AI systems with unacceptable risks became effective, along with AI literacy obligations. On August 2, 2025, the governance rules and obligations for GPAI models became applicable, meaning that providers of foundation models are now fully subject to relevant obligations.
2026-2027 implementation
The rules for high-risk AI systems become fully applicable on August 2, 2026, two years after entry into force of the regulation. For high-risk systems embedded in regulated products, an extended transition period applies until August 2, 2027, providing additional time for compliance in complex product ecosystems.
Enforcement and fines
For GPAI models, the obligations apply from August 2, 2025. Non-compliance by GPAI providers may be fined up to 3% of worldwide annual turnover or €15 million, whichever is higher (Article 101). Higher maxima of up to 7% or €35 million apply only to certain serious infringements, such as prohibited practices (Article 99).
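To make the arithmetic concrete: the cap for a GPAI provider is the higher of the two amounts, as in this illustrative calculation (turnover figures invented for the example).

```python
# The cap for a GPAI provider is the higher of 3% of worldwide annual turnover
# and EUR 15 million. The turnover figures below are invented for illustration.
def gpai_fine_cap(annual_turnover_eur: float) -> float:
    return max(0.03 * annual_turnover_eur, 15_000_000)

print(f"{gpai_fine_cap(2_000_000_000):,.0f}")  # 60,000,000 -> 3% of EUR 2 bn applies
print(f"{gpai_fine_cap(300_000_000):,.0f}")    # 15,000,000 -> the fixed floor applies
```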
Practical recommendations for compliance teams
For compliance and AI professionals who must implement these complex requirements, a systematic approach is essential. Begin with a thorough classification assessment to determine which systems qualify as high-risk and which assessments are therefore required.
Then develop integrated processes that coordinate the different assessment requirements. FRIAs and conformity assessments can provide complementary information but require different expertise and methodologies. Ensure that teams have both technical and legal expertise to adequately assess all aspects.
Implement robust documentation systems that not only demonstrate compliance but also facilitate continuous improvement. The dynamic nature of AI systems requires that assessments be regularly updated, which is only possible with adequate documentation and tracking of changes.
Most organizations will benefit from developing standardized templates and checklists for each type of assessment, ensuring consistency and reducing the compliance burden for future implementations.
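A lightweight starting point could be a simple mapping from assessment type to checklist items, as sketched below; the names and items are examples only and by no means exhaustive.

```python
# One possible shape for such checklists; the assessment names and items are
# examples only and by no means exhaustive.
ASSESSMENT_CHECKLISTS = {
    "classification": [
        "Map the system to an Annex III category (or rule it out)",
        "Document any Article 6(3) derogation and its reasoning",
    ],
    "fria": [
        "Describe intended use, duration and frequency",
        "Identify affected groups and risks to fundamental rights",
        "Record human oversight and mitigation measures",
    ],
    "conformity": [
        "Choose Annex VI (internal control) or Annex VII (notified body)",
        "Compile technical documentation and the EU declaration of conformity",
    ],
    "post_market": [
        "Define monitored metrics and alert thresholds",
        "Set up the serious-incident reporting workflow",
    ],
}

for assessment, items in ASSESSMENT_CHECKLISTS.items():
    print(f"{assessment}: {len(items)} checklist items")
```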
Final thoughts
The EU AI Act introduces an unprecedentedly comprehensive framework of risk assessments that will fundamentally change how organizations develop, implement, and monitor AI. The complexity of these requirements underscores the importance of early preparation and systematic implementation.
Success in AI Act compliance requires not only understanding of individual assessment requirements but also insight into their interconnections and the broader governance structures they support. Organizations that proactively invest in robust compliance processes will not only mitigate legal risks but also gain competitive advantage by building stakeholder confidence in their AI implementations.
The phased implementation still provides opportunity for organizations to develop their compliance systems, but the time for action is increasingly short. With the full implementation of high-risk system requirements in August 2026, now is the time to take concrete steps toward full AI Act compliance.
This analysis is based on the final text of Regulation (EU) 2024/1689 and related implementation documents available as of August 2025. Given the dynamic nature of AI regulation, it is recommended to regularly consult updates from relevant authorities.