Dutch AI sandbox: how it will work and what benefits it offers


The AI Regulation requires each member state to have at least one operational AI regulatory sandbox by August 2026. The Netherlands has now published a detailed design proposal, drawn up by the Dutch Data Protection Authority (AP) and the Dutch Authority for Digital Infrastructure (RDI) together with other supervisors and ministries. The proposal builds on exploratory work in 2023 and a 2024 pilot that handled 40 questions from 25 organizations. In this blog, we explain what the Dutch sandbox aims to achieve, how the process works, who takes on which role, and how your organization can prepare in practice.

Why a sandbox?

The core is simple: providers of AI systems can directly consult with supervisors during development about compliance with the AI Regulation, so products can reach the market with less uncertainty and delay. The sandbox increases legal certainty, stimulates innovation, provides input for better regulation, and documents workable practices that can serve as examples. Importantly, the instrument is not a free pass: rules are not suspended and the provider remains responsible for product conformity.

A second goal is knowledge building among supervisors. By looking over providers' shoulders at an early stage, supervisors see more quickly where interpretation issues lie and which technical and organizational choices work. This helps them later in regular supervision and in developing publicly available explanations and FAQs.

One national multi-sectoral sandbox, one portal

The Netherlands chooses one central, multi-sectoral sandbox where all relevant market supervisors are connected. Questions are submitted through one digital portal. General questions receive a central answer. Questions requiring sector-specific expertise are forwarded to the appropriate supervisor. For providers, this saves searching and coordination hassles; for supervisors, it ensures coherence in explaining the AI Regulation.

The sandbox does not offer its own testing facilities or computing capacity. The emphasis is on legal and technical-legal guidance for compliance. Where needed, references are made to existing facilities in the innovation landscape, such as Testing and Experimentation Facilities and European Digital Innovation Hubs. This keeps the sandbox accessible and usable for a broad group, without tying up public resources in expensive infrastructure.

The six phases of the process

The Dutch process consists of six consecutive phases. In practice, you can see this as a funnel: from broad intake to focused deepening.

1. Pre-registration. On the website you find explanations, available guidelines, and a form to submit your question. Here you check whether the question concerns the AI Regulation and whether an answer is not already available. Outcome: question forwarded or rejected.

2. Registration and selection. The Core Team performs triage. There are three routes: a written response, an expert question to a sector specialist, or a sandbox trajectory for complex questions. Outcome: written answer or trajectory assignment.

3. Preparation. Together with the supervisor, you establish a sandbox plan: goals, activities, timeline, conditions, and agreements about the publication of results. Outcome: approved sandbox plan.

4. Participation. Work sessions with legal and technical expertise test the system against the requirements of the AI Regulation. Expect adjustments when regulatory tensions arise. Outcome: tested AI system or termination.

5. Evaluation. Results are recorded in a final report covering activities, findings, and learning points, possibly with written evidence for the conformity assessment. Outcome: final report and possible certificate.

6. Post-participation. Learning results are shared publicly as anonymized FAQs or a public summary so that the broader ecosystem benefits. Outcome: public knowledge sharing.

Testing in real-world conditions

The AI Regulation provides scope to test certain high-risk AI systems in real-world conditions without this already counting as "placing on the market" or "putting into service." Within a sandbox trajectory, this is possible, provided there are appropriate safeguards for fundamental rights, health and safety, and provided the same level of protection is offered as required by law outside the sandbox. The advantage is evident: you collect robust, representative data to verify whether your system complies, before you formally enter the market.

Who does what? Roles and responsibilities

Core Team: Formed by the coordinating market supervisors, this team is the first point of contact. It answers simple questions, distributes expert questions, and decides which cases lend themselves to a trajectory. The Core Team also ensures knowledge sharing and annual public reporting on the functioning of the sandbox.

Market surveillance authorities: For the product groups and application areas covered by the AI Regulation, designated market surveillance authorities take on the role of handling supervisor. They guide expert questions and trajectories, maintain contact with you as the questioner, and ensure the quality and consistency of answers.

Sectoral and domain-specific supervisors: If supervisors already exist in your sector, such as mobility, healthcare, or financial services, they can be involved when their knowledge is needed. The extent of their involvement mirrors the role they will play in broader AI supervision.

Portal team: This team manages the website and the CRM process and performs the initial screening of questions. This keeps the intake manageable and ensures that incomplete or inappropriate questions are quickly filtered out.

Fundamental rights supervisors and data protection: Where a question touches on fundamental rights or involves the processing of personal data, the relevant authorities are involved. The Data Protection Authority is explicitly party to the sandbox when AI systems process personal data. In scenarios with heightened fundamental rights risks, fundamental rights supervisors can initiate investigations or receive notifications of incidents.

Interdisciplinary team: Besides lawyers, technical AI experts are needed who understand system design, data flows, model behavior, and evaluation methods. This mix is essential to formulate the question sharply and make the answers concrete and applicable.

Which questions belong in the sandbox?

The sandbox is intended for questions about compliance with the AI Regulation for systems that (possibly) must meet requirements under that regulation. Think of high-risk applications from Annex III, applications with transparency obligations and general-purpose systems, insofar as they do not fall under direct supervision of the European AI Office. Other legislation is only included if relevant in the context of the AI Regulation or if the AI Regulation requires it, for example with data protection.

Not every question requires a trajectory. Many questions can be answered in writing once your context is clear. The design proposal therefore strongly emphasizes an extensive registration and selection phase with clear submission requirements.

Submission requirements and selection principles

To be handled, your question must be sufficiently concrete. Expect to have at least the following ready:

  • a clear description of your AI system and usage context
  • your justification of the link with the AI Regulation and the specific obligations your question concerns

Because capacity is scarce, supervisors apply transparent selection principles for trajectories, in line with the goals of the regulation and future implementing acts of the European Commission. Start-ups and SMEs in the EU may receive priority access. Written responses are handled on a first come, first served basis, with room to temporarily close intake or reprioritize if the backlog of questions grows.

Example from practice

A developer builds a model for a shipping company that predicts when ship components need maintenance. The question is whether and how the AI Regulation applies and what requirements this entails.

The portal checks whether the question is complete and relevant and forwards it. The Core Team sees that sectoral knowledge is needed and involves the Human Environment and Transport Inspectorate. If the question proves more complex, a trajectory is started with a sandbox plan, sessions, and possibly testing in real-world conditions within agreed safeguards. After completion, there is a final report and, if everyone agrees, an anonymized public summary for the broader field.

What does this mean for your organization?

Organizations that plan to bring an AI system to market soon can use the sandbox to gain clarity faster and avoid expensive detours. This does, however, require preparation.

Essential is a clear articulation of your question. You must be able to specify precisely which article or which set of obligations you want to clarify, in which usage context this applies, and with which assumptions you are working. Vague or overly broad questions lead to generic answers that have little practical value.

Product maturity plays a crucial role in the success of your sandbox trajectory. Your question must fit the development phase your AI system is in. Questions that are too early are often too abstract to provide concrete guidance, while questions that are too late offer little room to make fundamental adjustments. The ideal moment is when you have made architectural choices but still have room for adjustments.

Ensure that your documentation is completely in order before approaching the sandbox. This means you have a clear system overview, have mapped data flows, conducted risk assessments and developed a validation strategy. Additionally, you must have an initial view of user interaction and human oversight. This documentation forms the basis for productive conversations with supervisors.

When your AI system processes personal data or brings broader fundamental rights risks, coherence with existing procedures is crucial. Ensure that your privacy and fundamental rights analyses (DPIA/FRIA) seamlessly align with the question you want to submit in the sandbox. This prevents contradictory conclusions and ensures consistent compliance.

Finally, you must be open to sharing learning results. Expect that insights from your trajectory can be shared anonymously with the broader ecosystem. This is not only good for the development of AI governance in the Netherlands, but ultimately also helps your suppliers and customers with their own compliance trajectories.

Do not see the sandbox as a quality mark. There is no certification and the judgment does not replace the later conformity assessment. It is a way to address interpretation issues in time, with involvement of the right authorities.

Points of attention and pitfalls

It is important to realize that the sandbox offers no way around the rules: the instrument relaxes nothing and grants no exemptions. You remain fully responsible for compliance with all applicable regulations, even during the sandbox trajectory.

Take capacity limitations and timing into account. Intake may be temporarily limited when demand pressure is high, so start your preparation early and deliver complete, well-founded questions. Incomplete applications lead to delay or rejection.

Regarding scope, you must focus on the AI Regulation. Other legislation is only included if it is functional within this context. The sandbox is not intended as a general legal advice desk for all aspects of your AI implementation.

Stay alert to upcoming European implementing acts. At the detail level, selection criteria and procedures will be further developed by the European Commission. The Dutch design anticipates this but may be adjusted as these implementing acts appear.

Without a solid technical foundation, advice remains generic and of little use. Ensure that your team has the necessary model and data details ready: superficial technical descriptions lead to superficial legal clarification.

Outlook

With this design proposal, the Netherlands is aiming for an accessible, coherent, and knowledge-driven sandbox that shortens the path to compliance with the AI Regulation: one portal, a clear division of roles, and a process that enables both quick written clarification and deeper trajectories. For organizations, the message is clear: prepare your case well, anchor your question in the relevant obligations of the AI Regulation, and use the sandbox to serve your business, your users, and society with AI that demonstrably complies with the rules. When the European implementing acts are published, fine-tuning will follow. The foundation is there; how you use it is up to you.


Need help with AI sandbox preparation? 💡

Do you want to know how the Dutch AI sandbox can help your organization with compliance with the AI Regulation? Or do you have questions about preparing your AI systems for sandbox participation? Contact us for a no-obligation conversation.