DPIA vs FRIA: practical guide for compliance in 2025


Navigating the overlap and differences between privacy and AI impact assessments

Important development: Organizations deploying high-risk AI systems face two different but related impact assessments: the DPIA from the GDPR and the new FRIA from the AI regulation. These assessments overlap but also have their own specific focus and requirements.

Why two different impact assessments?

In 2016, the GDPR introduced the Data Protection Impact Assessment (DPIA) as an instrument to identify and limit risks to fundamental rights and freedoms arising from data processing activities. With the EU AI regulation, which entered into force in August 2024, the Fundamental Rights Impact Assessment (FRIA) has been added, specifically developed for high-risk AI systems.

This dual obligation arises because AI systems can pose broader risks than just data protection. Where a DPIA concentrates on privacy-related risks, a FRIA looks at the full spectrum of fundamental rights that can be affected by AI.

DPIA: focus on data protection

When is a DPIA mandatory?

Under Article 35 of the GDPR, you must conduct a DPIA when processing is "likely to result in a high risk to the rights and freedoms of natural persons." This applies specifically when you:

  • Systematically and comprehensively assess personal aspects of people
  • Do this based on automated processing of personal data, including profiling
  • Base decisions on this that produce legal effects for people or otherwise significantly affect them

Minimum content of a DPIA

Article 35(7) of the GDPR requires that a DPIA contains at least:

  • Systematic description of processing activities and purposes
  • Assessment of necessity and proportionality of processing
  • Assessment of risks to rights and freedoms of data subjects
  • Measures to address identified risks

DPIA timing

A DPIA must be conducted before the start of data processing activities. Ideally, you conduct the DPIA during the planning phase of your project, not afterwards.

FRIA: broader fundamental rights focus

Legal basis and purpose

The Fundamental Rights Impact Assessment (FRIA) is established under Article 27 of the EU AI Act as a comprehensive tool to assess potential impacts on fundamental rights before deploying high-risk AI systems. Unlike the DPIA which focuses primarily on data protection, the FRIA takes a human-centric approach by examining all relevant fundamental rights that could be affected by AI systems - including human dignity, non-discrimination, freedom of expression, access to justice, and others.

This broader scope reflects the EU's recognition that AI systems can impact citizens' lives beyond privacy concerns, requiring a more holistic assessment of fundamental rights implications.

When is a FRIA mandatory?

Article 27 of the AI regulation obliges two specific categories of users (deployers) to conduct a FRIA - not the developers (providers) of AI systems:

Category 1: Public bodies and entities providing services of public interest

This category encompasses all public institutions (bodies governed by public law) and private entities providing services of public interest. The scope is deliberately broad and includes, among others:

  • Education: schools, universities, and training institutions
  • Healthcare: hospitals, medical centers, and health insurers
  • Social services: welfare agencies and employment services
  • Housing: social housing organizations and rental agencies
  • Justice and democratic processes: courts and electoral systems

The AI Act deliberately uses a broad interpretation of "services of public interest" to capture any private organization that provides services reasonably affecting the public interest, meaning that utility companies providing essential services like water or energy distribution could also fall under this category.

Critical infrastructure exception: Utility companies (water, energy, transport) and other critical infrastructure operators are exempt from FRIA obligations when using AI systems specifically as safety components for managing and operating critical infrastructure (Annex III point 2). However, if they use other high-risk AI systems outside this specific context (e.g., for HR decisions, customer credit assessments), the FRIA obligation does apply.

Category 2: Financial risk assessment systems (all deployers)

The second category applies universally to any organization - whether public or private - that uses high-risk AI systems for creditworthiness assessment or credit scoring of natural persons, or for risk assessment and premium calculation in life and health insurance. This means that banks, financial institutions, and insurance companies using such AI systems must conduct FRIAs regardless of their organizational structure or whether they provide public services.

Important exception: AI systems used exclusively for detecting financial fraud are explicitly excluded from FRIA requirements, even within financial institutions.

Which AI systems require a FRIA?

The FRIA obligation only applies to high-risk AI systems falling under Article 6(2) of the AI regulation. These are systems that pose significant risks to fundamental rights and include, among others:

  • AI applications in the administration of justice and democratic processes
  • Systems controlling access to education and vocational training
  • AI used for employment and personnel management decisions
  • Systems managing access to essential services
  • Biometric identification and categorization technologies
  • AI systems used in migration, asylum and border control

The common thread among these systems is their potential to significantly impact individuals' fundamental rights and life opportunities, which is why the EU has subjected them to enhanced scrutiny through the FRIA requirement.

Important: Not all high-risk AI systems require a FRIA. Key exclusions include:

  • Systems intended as safety components of products under EU harmonization legislation (Article 6(1))
  • Critical infrastructure safety systems: AI systems used as safety components for managing and operating critical infrastructure in digital infrastructure, transport, and utilities (water, gas, heat, electricity) - Annex III point 2
  • Note: The same organization may still need a FRIA for other high-risk AI applications outside critical infrastructure safety
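
As a thought aid, the combined deployer and system criteria above can be summarized in a short decision sketch. The Python snippet below is a deliberate simplification of Article 27 as summarized in this article, not legal advice; the function and flag names are invented for illustration, and any real determination requires legal review.

```python
def fria_required(
    is_public_body: bool,
    provides_public_interest_services: bool,
    is_credit_or_insurance_risk_ai: bool,
    is_annex_iii_high_risk: bool,
    is_product_safety_component: bool,
    is_critical_infra_safety_component: bool,
    is_fraud_detection_only: bool,
) -> bool:
    """Illustrative, non-authoritative check of the Article 27 criteria
    described above. Flag names are simplifications, not legal terms."""
    # The FRIA only concerns high-risk systems under Article 6(2) / Annex III,
    # not product safety components under Article 6(1)
    if not is_annex_iii_high_risk or is_product_safety_component:
        return False
    # Exclusion: safety components for critical infrastructure (Annex III point 2)
    if is_critical_infra_safety_component:
        return False
    # Exclusion: systems used exclusively to detect financial fraud
    if is_fraud_detection_only:
        return False
    # Category 2: credit scoring or life/health insurance risk assessment,
    # regardless of the type of deployer
    if is_credit_or_insurance_risk_ai:
        return True
    # Category 1: public bodies and private entities providing services
    # of public interest, for any other in-scope high-risk system
    return is_public_body or provides_public_interest_services


# Example: a private bank deploying an AI credit-scoring system
print(fria_required(
    is_public_body=False,
    provides_public_interest_services=False,
    is_credit_or_insurance_risk_ai=True,
    is_annex_iii_high_risk=True,
    is_product_safety_component=False,
    is_critical_infra_safety_component=False,
    is_fraud_detection_only=False,
))  # -> True
```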

FRIA process and requirements

A FRIA must be conducted before first use of the high-risk AI system, ensuring that potential fundamental rights impacts are assessed and mitigated before deployment. Unlike continuous monitoring requirements, the FRIA needs to be updated only when relevant elements change in the AI system's deployment or risk profile. After completion, the FRIA results must be reported to the market surveillance authority using a standardized template that will be published by the AI Office.

The assessment process involves a comprehensive evaluation of the AI system's intended use, the frequency and context of deployment, the categories of individuals who may be affected (directly or indirectly), potential risks to fundamental rights including discrimination and other human rights violations, and the mitigation measures planned to address identified risks, including human oversight mechanisms and complaint procedures.

Reporting requirements are mandatory for all completed FRIAs, using a standardized questionnaire format that the AI Office will provide. In exceptional circumstances involving urgent public safety concerns, protection of human lives, or critical infrastructure security, supervisory authorities may temporarily grant exemptions from the notification requirement. However, such exemptions are strictly temporary, and organizations must complete the normal FRIA procedure and fulfill reporting obligations as soon as the emergency situation is resolved.
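
One way to keep track of these process obligations is a simple internal record per high-risk system. The sketch below is a minimal illustration assuming a Python-based register; the class, its fields, and its methods are invented for this example and do not reflect any prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class FriaRecord:
    """Illustrative internal record of a FRIA's lifecycle; names are
    assumptions for this example, not an official format."""
    ai_system: str
    completed_on: date | None = None       # must be set before first use
    notified_authority: bool = False       # reported to the market surveillance authority
    relevant_changes: list[str] = field(default_factory=list)

    def may_deploy(self) -> bool:
        # The FRIA must be completed before the high-risk system is first used
        return self.completed_on is not None

    def needs_update(self) -> bool:
        # An update is only required when relevant elements of the deployment change
        return bool(self.relevant_changes)

    def mark_notified(self) -> None:
        # Results are reported using the template to be published by the AI Office
        self.notified_authority = True
```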

Practical differences between DPIA and FRIA

| Aspect | DPIA (GDPR) | FRIA (AI regulation) |
| --- | --- | --- |
| Focus | Data protection and privacy | All fundamental rights |
| Data type | Personal data only | Personal and non-personal data |
| Scope | All high-risk data processing | Specific high-risk AI systems |
| Who is obligated | Data controllers | Certain categories of deployers |
| Reporting | Internal (except prior consultation) | Mandatory notification to the market surveillance authority |

Overlap and complementarity

Integrated approach possible

The AI regulation acknowledges the overlap between both assessments. Article 27(4) states that a FRIA can complement an existing DPIA when both are required. In practice, this means organizations can choose:

  1. Two separate assessments: Separate DPIA and FRIA documents
  2. Integrated assessment: One combined document meeting both sets of requirements

Conditions for integration

For successful integration, the combined assessment must:

  • Cover all DPIA requirements from Article 35 GDPR
  • Contain all FRIA elements from Article 27 AI regulation
  • Address the broader scope of fundamental rights (not just data protection)

Practical tip for integration

Start with your existing DPIA template and expand it with FRIA elements such as non-discrimination, fairness, transparency and other relevant fundamental rights that may be affected by your AI system.
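
For illustration, such an integrated template could start out as a simple structure in which the first group of fields mirrors the Article 35(7) GDPR minimum content and the second adds the FRIA elements from Article 27. This is a sketch only; the section names below are assumptions, not an official template.

```python
# Hypothetical skeleton for an integrated DPIA + FRIA document;
# the section names are illustrative, not an official template.
INTEGRATED_ASSESSMENT_SECTIONS = {
    # DPIA core (Article 35(7) GDPR)
    "processing_description": "Systematic description of processing activities and purposes",
    "necessity_proportionality": "Necessity and proportionality of the processing",
    "privacy_risks": "Risks to the rights and freedoms of data subjects",
    "privacy_measures": "Measures to address the identified privacy risks",
    # FRIA additions (Article 27 AI regulation)
    "intended_use_and_context": "Intended purpose, frequency and context of deployment",
    "affected_groups": "Categories of persons affected, directly or indirectly",
    "fundamental_rights_risks": "Risks beyond privacy, e.g. non-discrimination and fairness",
    "human_oversight": "Human oversight measures and their implementation",
    "complaints_and_governance": "Complaint procedures and internal governance arrangements",
}


def missing_sections(completed: dict[str, str]) -> list[str]:
    """Return the template sections that have not yet been filled in."""
    return [name for name in INTEGRATED_ASSESSMENT_SECTIONS if not completed.get(name)]
```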

Fundamental rights: more than just privacy

Broader scope of FRIA

Where a DPIA primarily focuses on Article 8 of the EU Charter of Fundamental Rights (right to data protection), a FRIA must evaluate a much broader spectrum of rights, including:

  • Human dignity (Article 1), which forms the foundation of all other rights
  • Equality before the law (Article 20), ensuring fair treatment regardless of personal characteristics
  • Non-discrimination (Article 21), preventing unfair bias in AI decision-making
  • Cultural, religious and linguistic diversity (Article 22), protecting minority rights and cultural expression
  • The right to effective legal protection (Article 47), ensuring individuals can challenge AI decisions that affect them

Practical challenge

This broader scope makes FRIAs significantly more complex than DPIAs. Organizations must develop comprehensive knowledge about different fundamental rights beyond data protection, assemble multidisciplinary teams combining legal, technical, and ethical expertise, and develop systematic approaches to evaluate all relevant rights that could be impacted by their AI systems. This requires a shift from the relatively well-established privacy impact assessment methodology to a more holistic evaluation framework that many organizations are still developing.

Implementation in 2025: what to expect

Templates and support

The AI Office will publish a standardized FRIA template, similar to existing DPIA templates. This template will likely include a standard questionnaire for comprehensive fundamental rights evaluation, detailed guidelines for systematic risk identification and assessment across multiple rights categories, and standardized formats for reporting assessment results to supervisory authorities. These tools will help organizations navigate the complexity of fundamental rights assessment in a consistent and structured manner.

Timing and deadlines

Organizations already active with AI systems need to take a three-pronged approach. For existing systems, they must evaluate whether their current AI deployments fall under FRIA obligations and conduct assessments where required. For new systems, the FRIA process should be integrated into development workflows from the earliest stages, ensuring fundamental rights considerations are built into system design rather than added as an afterthought. Additionally, organizations need to establish ongoing monitoring processes to plan regular updates of their FRIAs when system parameters, use contexts, or risk profiles change.

Strategic approach: Start now by mapping your AI systems and their potential impact on fundamental rights. This gives you a head start on formal FRIA requirements and templates.

Compliance strategy: managing dual assessments

For organizations with both obligations

Many organizations will face both assessments and need to develop an integrated compliance strategy. This begins with creating a comprehensive inventory that maps all data processing activities and AI systems to understand the full scope of assessment requirements. Following this, organizations should conduct a thorough gap analysis to identify where DPIA and FRIA requirements overlap and where they differ, enabling more efficient resource allocation.

Where possible, organizations should focus on template development that creates integrated assessment frameworks meeting both sets of requirements, while designing streamlined processes that efficiently facilitate both types of assessments without duplicating effort. Finally, competency building becomes crucial, requiring training for compliance teams in the broader spectrum of fundamental rights evaluation beyond traditional data protection expertise.

Risk management approach

Impact assessments should be treated as integral components of broader organizational risk management rather than standalone compliance exercises. This means integrating DPIA and FRIA processes into existing project management workflows, linking them to established compliance frameworks like ISO standards or sector-specific regulations, and using assessment outcomes as key inputs for strategic AI implementation decisions. Organizations should also establish systematic monitoring and updating processes based on new insights, technological developments, and evolving regulatory guidance.

Supervision and enforcement

Different supervisory authorities

DPIAs fall under supervision of the Data Protection Authority, while FRIAs are reported to the market surveillance authority. These different reporting lines create practical challenges for organizations, requiring them to understand different supervisory expectations and procedures, develop tailored communication approaches for each authority, and potentially manage different update cycles and reporting formats. This dual supervision structure reflects the different regulatory origins of the two assessment types but can create coordination challenges in practice.

Sanctions and compliance

Non-compliance with assessment requirements can result in significant financial penalties. Under the GDPR, failing to carry out a required DPIA can lead to fines of up to €10 million or 2% of global annual turnover, whichever is higher (Article 83(4)). The AI regulation sets its general tier for infringements of operator obligations at up to €15 million or 3% of global annual turnover, whichever is higher (Article 99(4)). These substantial penalty levels underscore the importance both regulations place on proactive risk assessment and fundamental rights protection.
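
For illustration only, the "whichever is higher" rule works out as follows. The turnover figure below is made up, and which fine tier actually applies always depends on the specific infringement.

```python
def fine_cap(fixed_cap_eur: float, turnover_fraction: float, annual_turnover_eur: float) -> float:
    """'Whichever is higher': a fixed cap versus a fraction of global annual turnover."""
    return max(fixed_cap_eur, turnover_fraction * annual_turnover_eur)


# Hypothetical organization with a global annual turnover of EUR 2 billion
turnover = 2_000_000_000
print(fine_cap(10_000_000, 0.02, turnover))  # 40,000,000 (GDPR Article 83(4) tier)
print(fine_cap(15_000_000, 0.03, turnover))  # 60,000,000 (AI Act Article 99(4) tier)
```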

Looking forward: developments in 2025

Standardization and best practices

The year 2025 will likely bring significant developments in FRIA implementation. Organizations can expect the publication of official FRIA templates by the AI Office, providing much-needed standardization and clarity on assessment requirements. Sector-specific guidelines will emerge to address the unique challenges of different industries, from healthcare to financial services. We'll also see increased harmonization between different EU member states as national regulators align their approaches, and continued evolution toward greater integration between DPIA and FRIA processes as organizations and regulators gain practical experience with dual assessment requirements.

Technological support

The compliance technology market will likely respond to these new requirements with innovative solutions. We can expect automated tools that streamline the impact assessment process, making it easier for organizations to conduct comprehensive fundamental rights evaluations. AI-driven risk analysis platforms will emerge to help identify potential rights impacts across complex AI systems, while integrated compliance dashboards will provide organizations with unified views of their DPIA and FRIA obligations. Additionally, sector-specific assessment frameworks will be developed to address the unique challenges and risk profiles of different industries, from healthcare and education to financial services and public administration.

Practical steps for 2025

  1. Audit your current AI systems for FRIA requirements
  2. Develop integrated assessment templates
  3. Train your compliance team in fundamental rights evaluation
  4. Establish procedures for ongoing monitoring
  5. Prepare reporting processes to relevant authorities

Conclusion

The introduction of FRIA alongside existing DPIA obligations marks an important shift in how we think about AI governance. Where data protection has long been the primary lens for privacy-related risks, the EU now recognizes that AI systems can affect a broader spectrum of fundamental rights.

For organizations, this means an expansion of compliance obligations, but also an opportunity to develop more holistic risk management. By viewing DPIA and FRIA not as separate obligations but as complementary instruments for responsible technology implementation, organizations can develop more effective governance structures.

The coming months will be decisive for how these new instruments function in practice. Organizations that now invest in understanding and implementing integrated impact assessments position themselves not only for compliance but also for more sustainable and ethical AI implementation.


The FRIA obligations from the AI regulation will apply from 2 August 2026. Organizations will then have concrete obligations to assess fundamental rights impact before implementing high-risk AI systems. Effective integration with existing DPIA processes is increasingly becoming a strategic necessity.