
DSA & AI Act

Platform compliance on two fronts

Zahed Ashkara · Updated: June 2026 · ~12 min read

Why platforms need to tackle DSA and AI Act simultaneously

Online platforms face a fundamental dual compliance obligation. The Digital Services Act (DSA) imposes requirements on content moderation, algorithmic transparency and the prevention of manipulative design practices. The AI Act imposes requirements on the systems platforms use to perform that moderation and personalisation. Both regulations apply simultaneously — and at crucial points they intersect.

The recommender systems that determine which content a user sees fall simultaneously under DSA Article 27 (explainability of recommender systems) and the AI Act (transparency requirements for AI systems). The AI tools platforms deploy for automated content moderation are covered by DSA procedural safeguards and affected by the AI Act transparency obligations in Article 50. And the risk assessment that Very Large Online Platforms (VLOPs) must conduct for systemic risks overlaps significantly with the risk assessment framework of the AI Act.

This makes DSA × AI a particularly complex intersection: neither regulation can be fully understood without the other. Platforms that focus exclusively on DSA miss the AI Act layer in their moderation and recommendation infrastructure. Platforms that focus exclusively on the AI Act miss the platform-specific obligations of the DSA. The smartest approach is a coordinated compliance programme that addresses both frameworks simultaneously.

What is the Digital Services Act?

The Digital Services Act (Regulation (EU) 2022/2065) became fully applicable to all in-scope service providers on 17 February 2024. The DSA creates a harmonised European framework for intermediary service providers — from simple hosting providers to the largest social media platforms and search engines. The central principle is that what is illegal offline must also be illegal online, and that platforms bear responsibility for how their systems shape the information environment of users.

The regulation introduces a layered structure of obligations. All intermediary service providers receive basic transparency and accountability obligations. Hosting providers receive additional requirements for reporting illegal content and handling complaints. Online platforms add obligations for content moderation, recommender systems and advertising transparency. And the Very Large Online Platforms and Very Large Online Search Engines (VLOPs and VLOSEs) — with more than 45 million monthly active users in the EU — receive the heaviest category of obligations, including mandatory systemic risk assessments and external audits.

For AI systems the DSA is particularly relevant because virtually all forms of algorithmic personalisation, automated content decisions and profile-based targeting fall directly in scope. Platforms deploying AI for content moderation, recommendations or advertising selection operate at the intersection of both regulatory frameworks. The European Commission directly supervises compliance for VLOPs and VLOSEs; for smaller platforms, national Digital Services Coordinators (DSCs) are responsible. In the Netherlands, the Authority for Consumers and Markets (ACM) has been designated as the DSC.

The 5 intersections of DSA and AI Act

Below we describe the five concrete points where DSA and AI Act intersect. For each intersection we explain what the obligation entails, which articles apply, and what this means for your compliance approach as a platform.

01

Algorithmic transparency

DSA Art. 27 × AI Act Art. 13-14

DSA Article 27 requires online platforms that use recommender systems to explain in their terms of service the main parameters the system uses and the options for users to adjust those parameters; for VLOPs, Article 38 adds the requirement to offer at least one option not based on profiling. These are broad transparency requirements aimed at comprehensibility for ordinary users: the emphasis is on what the system does and how users can exercise control over it.

The AI Act adds a different layer. Article 13 requires providers of high-risk AI systems to give those who deploy the system (deployers) sufficient information to interpret and assess the output. Article 14 sets requirements for human oversight: deployers must be able to intervene, stop the system and correct decisions. Recommender systems that can have significant effects on users — think news recommendations with impact on political opinion formation — may qualify as high-risk AI. Then both transparency regimes apply simultaneously: the DSA explanation to users and the AI Act documentation for deployers. The challenge is that the two regimes serve different audiences: users versus deployers.

In practice: document for each recommender system the parameters and operation in two versions: a comprehensible user version for the DSA obligation (Art. 27) and a technical deployer version for the AI Act (Art. 13). Verify whether the system qualifies as high-risk and, if so, establish human oversight procedures (Art. 14).
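The dual-documentation duty can be sketched as one record per recommender system that holds both versions and flags what is still missing. This is a minimal illustration: the class name, field names and gap messages are assumptions, not terms from either regulation.

```python
from dataclasses import dataclass, field

@dataclass
class RecommenderDoc:
    """One recommender system, documented for both audiences (illustrative sketch)."""
    system_name: str
    # DSA Art. 27: plain-language parameters for the user-facing explanation
    user_parameters: list
    non_profiling_option: bool
    # AI Act Art. 13: technical information for deployers
    deployer_docs: dict = field(default_factory=dict)
    high_risk: bool = False

    def compliance_gaps(self) -> list:
        """List open items under either framework."""
        gaps = []
        if not self.user_parameters:
            gaps.append("DSA Art. 27: no user-facing parameter explanation")
        if not self.deployer_docs:
            gaps.append("AI Act Art. 13: no deployer documentation")
        if self.high_risk and "human_oversight" not in self.deployer_docs:
            gaps.append("AI Act Art. 14: no human oversight procedure")
        return gaps
```

A high-risk system documented for users and deployers but without an oversight procedure would report exactly that remaining gap.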

02

AI content moderation

DSA Art. 15-16 × AI Act Art. 50

DSA Article 16 requires hosting providers and online platforms to establish mechanisms by which users can report illegal content (notice-and-action), and Article 15 requires them to report on measures taken in response to such notifications in transparency reports. Platforms using automated systems for content moderation must explain how those systems work, how often they make mistakes, and what safeguards exist for human review.

AI Act Article 50 introduces a specific transparency obligation for AI systems that interact with people or generate content: users must be informed that they are communicating with an AI system or that content is AI-generated, unless this is obvious from the context. For content moderation AI that makes decisions about removing, hiding or demoting user contributions, the person whose content is affected additionally has a right to an explanation — an obligation fed by both the DSA (through the complaint procedure) and the AI Act (through transparency requirements). Automated content decisions without a human review option are problematic under both regulatory frameworks.

In practice: ensure your AI moderation systems have a clear internal audit trail (which decision, on the basis of which parameters, when) and an externally visible appeal route for users. Combine the DSA complaint procedure with the AI Act transparency obligation into a single integrated workflow — that way you do not need to build two separate systems.
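A minimal sketch of such an audit-trail record, assuming one JSON log line per moderation decision. The field names and the appeal path are invented for illustration; the point is that the which/why/when of each decision and the single appeal entry point live in one record.

```python
import json
import uuid
from datetime import datetime, timezone

def log_moderation_decision(content_id: str, action: str, parameters: dict) -> str:
    """Serialise one AI moderation decision as a JSON audit-trail record."""
    record = {
        "decision_id": str(uuid.uuid4()),          # which decision
        "content_id": content_id,
        "action": action,                          # e.g. "remove", "demote", "keep"
        "parameters": parameters,                  # on the basis of which signals
        "automated": True,                         # must be disclosed to the affected user
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "appeal_path": f"/appeals/{content_id}",   # one entry point: DSA complaint + AI Act explanation
    }
    return json.dumps(record)
```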

03

Risk assessments for VLOPs

DSA Art. 34-35 × AI Act risk framework

DSA Articles 34 and 35 require VLOPs and VLOSEs to conduct annual systemic risk assessments. Those assessments must identify the risks the platform poses for societal themes such as democracy, public health, safety and fundamental rights — and must demonstrate what risk mitigation measures the platform takes. External auditors then verify whether those measures are adequate. This is a broad, systems-level assessment that goes beyond individual cases.

The AI Act works with a different risk framework: for each AI system it is determined whether it is high-risk, limited-risk or minimal-risk, and on that basis specific obligations apply (technical file, conformity assessment, registration in the EU database). For platforms this means that the same algorithmic infrastructure — the recommender system, the moderation tool, the advertising platform — is simultaneously subject to the DSA systemic risk assessment and the AI Act risk classification per system. The two assessments overlap but are not identical: the DSA looks at societal systemic effects, the AI Act at risks for individual users per system.

In practice: use the AI Act risk classification per system as a building block for the broader DSA systemic risk assessment. Document which AI systems are high-risk and incorporate those findings into the DSA risk assessment. This creates one integrated risk analysis that both supervisors can use — and avoids duplicate work in the annual DSA audit.
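The building-block idea amounts to a small aggregation step: classify each system under the AI Act, then feed the high-risk subset into the DSA Art. 34 assessment. The inventory below is a made-up example of such a per-system classification.

```python
def dsa_risk_inputs(ai_act_classification: dict) -> list:
    """Systems that must feature in the DSA systemic risk assessment:
    every system classified as high-risk under the AI Act."""
    return sorted(name for name, level in ai_act_classification.items()
                  if level == "high-risk")

# Illustrative per-system AI Act classification for one platform
inventory = {
    "recommender": "high-risk",
    "ad-targeting": "high-risk",
    "spam-filter": "minimal-risk",
}
```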

04

Dark patterns and manipulation

DSA Art. 25 × AI Act Art. 5

DSA Article 25 prohibits online platforms from using so-called "dark patterns": design and interaction techniques that deceive, manipulate or nudge users into making choices that are not in their interest. Think of misleading button placement, hidden unsubscribe options, false urgency messages or the automatic reset to an algorithmic timeline after choosing chronological — a practice recently characterised as a DSA violation by a Dutch court in the case against Meta.

AI Act Article 5 prohibits AI systems that exploit subliminal techniques, exploitation of vulnerabilities or other manipulative methods to significantly influence people's behaviour in a way that harms their interests. These are prohibited practices — the most severe category in the AI Act, with fines up to €35 million or 7% of global annual turnover. The overlap is precisely at the point where AI systems are used to implement or amplify dark patterns: an algorithm that deliberately identifies vulnerable users to expose them more often to engagement-driving content falls under both prohibitions simultaneously. This is not a theoretical risk — it is precisely what regulators such as the European Commission are investigating at TikTok and Meta.

In practice: perform a dual test for each personalised design element (notifications, recommendations, interface choices): is this a dark pattern under DSA Art. 25? And does it involve AI manipulation under AI Act Art. 5? Record the outcome as part of your risk documentation. Ensure product and tech teams are aware of both prohibitions.
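The dual test can be captured as one record per design element. In reality the boolean inputs would come from a human design review, not from code; all names here are assumptions for illustration.

```python
def dual_pattern_check(element: dict) -> dict:
    """Apply both prohibitions to one design element and record the outcome."""
    dark_pattern = element.get("deceptive_design", False)            # DSA Art. 25
    ai_manipulation = (element.get("ai_driven", False)
                       and element.get("exploits_vulnerability", False))  # AI Act Art. 5
    return {
        "element": element["name"],
        "dsa_art_25": dark_pattern,
        "ai_act_art_5": ai_manipulation,
        "blocked": dark_pattern or ai_manipulation,  # either prohibition stops rollout
    }
```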

05

Advertising and profiling

DSA Art. 26 × AI Act + GDPR

DSA Article 26 requires online platforms to make clear for every advertisement shown to a user that it is an advertisement, on whose behalf it is shown, and on the basis of which parameters the user was selected as a target. For VLOPs an additional requirement applies: they must maintain a searchable register of all advertisements they display — the so-called advertising repository. And targeted advertising based on profiling with special categories of personal data (religion, political views, health) is explicitly prohibited for all online platforms.

AI-driven advertising selection touches three regulatory frameworks simultaneously here. The DSA sets transparency requirements and prohibits certain forms of targeting. GDPR requires a valid legal basis for the profiling underlying that targeting — and for special categories explicit consent is required. The AI Act adds that AI systems for advertising selection that have significant effects on individuals may be classified as high-risk, with all the documentation and transparency obligations that entails. Platforms combining AI recommendations with advertising targeting must address all these layers simultaneously in their compliance architecture.

In practice: for each AI-driven advertising system, map which personal data are processed (GDPR), what transparency is required (DSA) and whether the system qualifies as high-risk (AI Act). Do not treat this as three separate compliance tracks but as one integrated data flow assessment. The advertising repository required by the DSA can also serve as part of the AI Act documentation obligation.
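One integrated data flow assessment could look like the sketch below: a single function that answers the GDPR, DSA and AI Act questions for one advertising system. The input fields are hypothetical, and any real assessment needs legal review rather than a boolean check.

```python
SPECIAL_CATEGORIES = {"religion", "political_views", "health"}  # examples of special-category data

def ad_system_assessment(system: dict) -> dict:
    """Assess one AI-driven advertising system across all three layers (sketch)."""
    data = set(system.get("personal_data", []))
    uses_special = bool(data & SPECIAL_CATEGORIES)
    return {
        "system": system["name"],
        "gdpr_basis_required": bool(data),            # GDPR: legal basis for the profiling
        "dsa_targeting_prohibited": uses_special,     # DSA Art. 26: no special-category targeting
        "ai_act_high_risk_candidate": system.get("significant_effect", False),  # AI Act classification
    }
```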

Where do you need to comply on both fronts?

Below is a practical checklist of measures that both the DSA and the AI Act require of platforms. These are the points where an integrated approach delivers the most value.

Transparency documentation for recommender systems

Art. 27 DSA (explanation of parameters and user options for recommender systems) ⚖️ Art. 13 AI Act (transparency and information obligations for high-risk AI)

Human oversight of automated content decisions

Art. 20 DSA (internal complaint mechanism + human review) ⚖️ Art. 14 AI Act (human oversight for high-risk systems)

Systemic risk assessment (VLOPs and VLOSEs)

Art. 34-35 DSA (annual systemic risk assessment + external audit) ⚖️ Risk classification per AI system + conformity assessment

Verification of prohibition on manipulative AI practices

Art. 25 DSA (prohibition on dark patterns) ⚖️ Art. 5 AI Act (prohibited AI practices)

Advertising transparency and targeting restrictions

Art. 26 DSA (advertising labelling and targeting parameters) + advertising repository for VLOPs ⚖️ AI Act high-risk requirements for advertising AI + GDPR legal basis for profiling

Notice-and-action procedure for AI-generated illegal content

Art. 16 DSA (notice-and-action mechanism) ⚖️ Art. 50 AI Act (transparency about AI-generated content)

Technical file and registration for high-risk AI systems

DSA transparency reports + audit support ⚖️ Art. 11 + 49 AI Act (technical documentation + registration in the EU database)

Complaint and appeal procedure for users

Art. 17 + 20-21 DSA (statement of reasons, complaint handling, out-of-court dispute settlement) ⚖️ Art. 86 AI Act (right to explanation for impactful AI decisions)
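A compliance tracker could store such pairings as a machine-readable mapping and report what is not yet covered. The sketch below includes only a few of the pairings as a partial illustration; the topic labels are invented.

```python
# Partial mapping of dual obligations, as a compliance tracker might store it
DUAL_OBLIGATIONS = {
    "recommender transparency": ("DSA Art. 27", "AI Act Art. 13"),
    "manipulative practices": ("DSA Art. 25", "AI Act Art. 5"),
    "risk assessment": ("DSA Art. 34-35", "AI Act risk classification"),
    "advertising": ("DSA Art. 26", "AI Act high-risk + GDPR"),
}

def uncovered(topics_done: set) -> dict:
    """Dual obligations not yet marked as covered."""
    return {topic: articles for topic, articles in DUAL_OBLIGATIONS.items()
            if topic not in topics_done}
```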

How compliant is your platform on both fronts?

The AI Readiness Score tests your organisation specifically on the intersections between the AI Act and related legislation such as the DSA. You get a score per theme and concrete recommendations for platforms.