
GDPR & AI Act

Two regulations, one compliance challenge

Zahed Ashkara · Updated: June 2026 · ~12 min read

Why you need to think about GDPR and the AI Act together

Many organisations treat GDPR compliance and AI Act compliance as two separate tracks. That is understandable — they were introduced by different directorates, they have different deadlines, and they are enforced by different authorities. But when you look at AI systems that process personal data, you find that the requirements overlap at multiple points.

The AI Act explicitly references GDPR. Article 10 (data governance) presupposes that you already implement GDPR principles. Article 13 (transparency) builds on the information obligations from GDPR. And for high-risk AI systems that make decisions about people, both Art. 22 GDPR and the AI Act requirements for human oversight apply simultaneously.

This means: if you focus only on the AI Act, you miss GDPR obligations. If you focus only on GDPR, you miss the new AI Act layer. The smartest approach is integrated compliance — one assessment, two legal frameworks.

What is the GDPR?

The General Data Protection Regulation (GDPR) has applied across the EU since 25 May 2018. It regulates how organisations may collect, process and store personal data. Its core principles include data minimisation, purpose limitation, transparency, and integrity and confidentiality.

For AI systems, GDPR is particularly relevant because almost every AI system directed at users processes personal data. Think of names, location data, behavioural patterns, biometric features. GDPR requires a valid legal basis for every processing activity, and for automated decision-making additional protective requirements apply.

In the Netherlands, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) supervises compliance with GDPR. It can impose fines of up to €20 million or 4% of global annual turnover, whichever is higher.

The 5 intersections of GDPR and AI Act

Below we describe the five concrete points where GDPR and AI Act intersect. For each intersection we explain what the obligation entails, which articles apply, and what this means for your compliance approach.

01

Automated decision-making

GDPR Art. 22 × AI Act high-risk (Annex III)

Article 22 GDPR gives people the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significantly affect them. Think of loan applications, recruitment processes, social benefits or medical triage. Even where one of the exceptions applies (contract, legal authorisation or explicit consent), GDPR requires the possibility of human intervention, the right to contest the decision and meaningful information about the logic involved.

The AI Act adds a layer to this: systems that apply automated decision-making in the sectors listed in Annex III (credit, education, HR, essential services, etc.) are classified as high-risk AI systems. For those systems, obligations apply regarding registration, transparency, human oversight and technical robustness. These obligations overlap precisely with the situations in which Art. 22 GDPR already applies.

In practice: perform a dual test for each automated decision process: Art. 22 GDPR (is human intervention guaranteed?) and the AI Act (is the system high-risk, and does it meet the technical requirements?).
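A dual test like this can be kept explicit by modelling each decision process as data and running a small check over it. The sketch below is purely illustrative: every field name and the exact conditions are assumptions, not definitions from either regulation.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionProcess:
    """One automated decision process under review (hypothetical model)."""
    name: str
    solely_automated: bool            # no meaningful human involvement?
    significant_effect: bool          # legal or similarly significant effect?
    annex_iii_sector: bool            # credit, education, HR, essential services, ...
    human_intervention_possible: bool
    technical_requirements_met: bool  # registration, logging, oversight, robustness

def dual_test(p: AutomatedDecisionProcess) -> list[str]:
    """Return the open compliance actions under both frameworks."""
    actions = []
    # GDPR Art. 22 angle: solely automated + significant effect -> safeguards needed
    if p.solely_automated and p.significant_effect and not p.human_intervention_possible:
        actions.append("GDPR Art. 22: guarantee human intervention and right to contest")
    # AI Act angle: Annex III sector -> high-risk obligations must be met
    if p.annex_iii_sector and not p.technical_requirements_met:
        actions.append("AI Act: meet high-risk requirements (registration, oversight, robustness)")
    return actions
```

Running both checks from one record keeps the two frameworks from drifting apart in separate spreadsheets.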

02

DPIA versus FRIA

GDPR Art. 35 × AI Act Art. 27

Under GDPR you are required to perform a Data Protection Impact Assessment (DPIA) when processing is likely to result in a high risk to the rights and freedoms of people. This is almost always the case with large-scale processing of special categories of personal data or profiling with significant effects.

The AI Act introduced a Fundamental Rights Impact Assessment (FRIA) in Article 27. Unlike a DPIA, a FRIA looks at the full spectrum of fundamental rights — not just privacy. The FRIA is mandatory for public bodies and private organisations providing services of public interest (education, healthcare, social services) that deploy high-risk AI. The European Commission is developing a template to integrate DPIA and FRIA into a single combined assessment.

In practice: plan the DPIA and FRIA simultaneously for new AI projects. Use the overlapping data-protection questions in the FRIA as input for the DPIA, so you avoid duplicate work. See also our detailed comparison.

DPIA vs FRIA: complete comparison →
03

Transparency & explainability

GDPR Art. 13-14 × AI Act Art. 13

GDPR requires organisations to inform data subjects about processing of their personal data. For automated decision-making (Art. 22) you must additionally provide "meaningful information about the underlying logic". This is not an empty obligation — regulators expect people to understand on the basis of which variables a decision was reached.

AI Act Article 13 imposes a similar but broader transparency obligation on high-risk AI systems: users must be able to interpret and assess the output of the system. Documentation requirements (technical file, logs) serve partly as the basis for that explainability. The challenge is that GDPR requires "meaningful explanation to the data subject", while the AI Act requires "interpretability for the deployer" — these are partly different audiences with partly different information needs.

In practice: for AI systems that make decisions about people, develop an explainability layer that serves both levels: a clear summary for the data subject (GDPR) and a technically substantiated explanation for internal control (AI Act).
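One way to picture such a two-audience explainability layer is a single function that renders the same decision twice: in plain language for the data subject, and with technical detail for the deployer. The field names below are illustrative assumptions, not a prescribed format.

```python
def explain_decision(decision: dict) -> dict:
    """Render one model decision for two audiences (hypothetical fields)."""
    return {
        # GDPR: meaningful, plain-language summary for the data subject
        "data_subject": (
            f"Your application was {decision['outcome']}, "
            "mainly because of: " + ", ".join(decision["top_factors"])
        ),
        # AI Act: technically substantiated view for the deployer / internal control
        "deployer": {
            "model_version": decision["model_version"],
            "feature_contributions": decision["feature_contributions"],
            "confidence": decision["confidence"],
        },
    }
```

Deriving both views from one source record helps keep the summary shown to the data subject consistent with what the internal audit trail says.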

04

Data governance

GDPR data minimisation × AI Act Art. 10 training data

GDPR prescribes data minimisation: you may not process more personal data than strictly necessary for the purpose. This is a fundamental principle that sits in tension with the tendency of machine learning to gather as much data as possible for better model performance.

AI Act Article 10 sets specific requirements for the training, validation and test data of high-risk AI systems. The data must be relevant, representative and, to the best extent possible, free of errors and complete. Article 10 also requires control of bias: an active duty to verify that the data does not contain discriminatory patterns. That verification can require processing categories of personal data that are specially protected under GDPR (ethnicity, gender, health). There is thus an inherent tension: Art. 10 requires data analysis on sensitive categories, while GDPR strictly restricts that processing.

In practice: explicitly document the legal ground you rely on for bias analysis on sensitive categories of training data; Art. 10(5) AI Act provides a specific basis for processing special categories where this is strictly necessary for bias detection and correction. Keep the scope of data collection limited to what is necessary for that bias check: minimisation remains the principle.

Read more about Article 10 data governance →
05

Supervision & enforcement

Dutch DPA as market supervisory authority

In the Netherlands, the Dutch Data Protection Authority (AP) has been designated as the national market supervisory authority for the AI Act. That is a deliberate choice: the AP already has expertise in assessing data processing operations and automated systems. But it also means the AP now enforces two legal frameworks simultaneously — GDPR and AI Act.

This has practical consequences. A complaint about an AI system making personal decisions can be investigated by the AP from two angles simultaneously: violation of Art. 22 GDPR (no human oversight) and violation of the AI Act (high-risk system not registered or without technical file). The AP can enforce and impose fines on both grounds. The total penalty risk is therefore significantly greater than with a single supervisory framework.

In practice: treat the AP as the supervisor at the intersection of privacy and AI. Ensure that for every AI system making decisions about people the documentation is in order for both frameworks: a record of processing activities (GDPR) and a technical file (AI Act).

What do you need to arrange under both frameworks?

Below is a practical checklist of measures that both GDPR and the AI Act require. These are the points where an integrated approach delivers the most value.

Legal basis for data processing in the AI system

Art. 6 GDPR (processing) + Art. 9 GDPR (special categories) ⚖️ Art. 10 AI Act (data governance for training data)

Transparency documentation for decisions about individuals

Art. 13-14 GDPR (information obligations) + Art. 22(3) GDPR (explanation for automated decisions) ⚖️ Art. 13 AI Act (transparency and information obligations for high-risk AI)

Procedure for human review of automated decisions

Art. 22(2)(b) GDPR (right to human intervention) ⚖️ Art. 14 AI Act (human oversight for high-risk systems)

Impact assessment before deployment

Art. 35 GDPR (DPIA required for high risk) ⚖️ Art. 27 AI Act (FRIA required for public bodies and certain private deployers)

Records of processing activities / technical file

Art. 30 GDPR (records of processing) ⚖️ Art. 11 AI Act (technical documentation) + Art. 49 (registration in the EU database)

Bias detection and discrimination monitoring

Art. 9 GDPR + accuracy principle (Art. 5(1)(d)) ⚖️ Art. 10(2) AI Act (representativeness, error-free, no discriminatory patterns)

Security measures for the AI system

Art. 32 GDPR (security of processing) ⚖️ Art. 15 AI Act (accuracy, robustness and cybersecurity of high-risk AI)

Procedure for data breaches and AI incidents

Art. 33-34 GDPR (data breach notification) ⚖️ Art. 73 AI Act (notification of serious incidents for high-risk AI)
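As a sketch, a checklist like the one above can also be kept as machine-readable data, so an internal tool can report which dual obligations are still open per AI system. The keys and the helper below are hypothetical, not an official schema; the article pairings follow the list in this section.

```python
# Each checklist entry pairs the GDPR ground with the corresponding AI Act provision.
DUAL_OBLIGATIONS = {
    "legal_basis":        ("GDPR Art. 6 + 9",            "AI Act Art. 10"),
    "transparency_docs":  ("GDPR Art. 13-14 + 22(3)",    "AI Act Art. 13"),
    "human_review":       ("GDPR Art. 22(2)(b)",         "AI Act Art. 14"),
    "impact_assessment":  ("GDPR Art. 35 (DPIA)",        "AI Act Art. 27 (FRIA)"),
    "documentation":      ("GDPR Art. 30",               "AI Act Art. 11"),
    "bias_monitoring":    ("GDPR Art. 9 + 5(1)(d)",      "AI Act Art. 10(2)"),
    "security":           ("GDPR Art. 32",               "AI Act Art. 15"),
    "incident_response":  ("GDPR Art. 33-34",            "AI Act Art. 73"),
}

def open_items(status: dict[str, bool]) -> list[str]:
    """Return the checklist entries not yet marked compliant for one AI system."""
    return [key for key in DUAL_OBLIGATIONS if not status.get(key, False)]
```

Keeping the pairings in one structure makes it harder for a GDPR measure to be ticked off while its AI Act counterpart is forgotten, or vice versa.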

How compliant are you on both fronts?

The AI Readiness Score tests your organisation specifically on the intersections between the AI Act and related legislation such as GDPR. You get a score per theme and concrete recommendations.