The Value Chain in the EU AI Act: Who Has Which Responsibilities?
The EU AI Act covers the entire lifecycle of an AI system and imposes obligations on various actors in the value chain. But who exactly are these actors, and what responsibilities do they have? In this blog, we dive deep into the different roles that the EU AI Act defines and what they mean for companies, users, and other stakeholders. The AI Act aims to ensure that AI technology is deployed in a safe, ethical, and responsible manner. This requires clear rules and close cooperation between all actors involved in the development, distribution, and use of AI.
The Value Chain Explained
The EU AI Act considers AI systems from the moment of design until their actual use on the market. This means that responsibility for complying with the legislation rests not only with providers (such as developers and manufacturers), but also with other actors such as users, importers, distributors, authorized representatives, and even affected persons, who are granted specific rights. Below, we discuss the main actors in the value chain and their responsibilities, making clear what role each party plays in ensuring safe and reliable AI.
1. Providers: The Key Figures
Providers are the actors who develop AI systems and place them on the EU market. Whether established inside or outside the EU, providers face extensive responsibilities. They must comply with the requirements for high-risk AI systems, such as setting up a risk management system, preparing technical documentation, and drawing up a declaration of conformity. Additionally, providers must apply the CE marking and, where applicable, ensure registration in the EU database. The risk management system covers identifying, evaluating, and mitigating risks throughout the entire lifecycle of the AI system, which requires continuous effort to guarantee the safety of the systems.
Providers are also responsible for monitoring the system after it has been placed on the market, known as 'post-market monitoring'. This means they must record and report incidents to the competent authorities. The aim is to ensure that any problems that only come to light after the system's introduction are addressed quickly and effectively. Moreover, providers must cooperate with other actors in the value chain, such as users and authorized representatives, to ensure compliance with regulations.
2. Users (Deployers): More Than Just Consumers
Users of AI systems, referred to as 'deployers' in the final text of the AI Act, are not merely passive consumers; they also have various responsibilities. These include implementing human oversight, monitoring system operation, and reporting incidents to the provider. Users must operate AI systems as described in the instructions for use supplied by the provider, which means they need to be aware of the correct way to use the system and of its limitations.
For government authorities, there is an additional requirement to register their use of high-risk AI systems in the EU database. This enables better oversight of AI use in the public sector. Additionally, users must ensure sufficient human oversight is in place to evaluate system operation and intervene when necessary. This is particularly important for high-risk AI systems that can affect individual rights and freedoms.
Users also have the responsibility to inform people who interact with AI systems, such as customers or employees, about the use of these systems. This increases transparency and ensures individuals are aware of AI deployment, especially when their rights or freedoms might be affected. An example is the use of AI systems in decision-making processes that impact individuals, such as job application procedures or credit decisions.
3. Importers and Distributors: Responsible for Conformity
Importers and distributors play an important role in the value chain, as they must ensure AI systems comply with EU AI Act requirements before being placed on the market. They are obligated to verify conformity, ensure proper documentation, and report any non-conformities. Importers must verify that the provider has complied with AI Act requirements and that the necessary documentation is available. Distributors are responsible for ensuring the product is stored and transported properly to maintain the safety and conformity of the AI system.
Importers and distributors must also take necessary measures when they believe an AI system does not meet requirements. This may include not placing the system on the market or informing competent authorities about potential risks. In this way, they contribute to ensuring the safety and reliability of AI systems offered on the market.
4. Manufacturers and Authorized Representatives
When an AI system is placed on the market together with a product and under the manufacturer's name or trademark, the product manufacturer assumes the provider's responsibilities. This means the manufacturer must ensure the AI system complies with EU AI Act requirements before the product is placed on the market. Authorized representatives, acting on behalf of providers established outside the EU, must ensure compliance with the AI Act's obligations. They serve as the point of contact within the EU and are responsible for communicating with authorities and taking corrective measures when necessary.
5. Third Parties and Affected Persons
Third parties, such as component and data suppliers, are obligated to assist providers in complying with regulations. This means they play a crucial role in delivering safe and reliable components and datasets used in AI systems. If third parties fail to meet their obligations, this can affect the entire value chain and compromise the AI system's conformity.
Affected persons, on the other hand, have the right to explanation regarding decision-making based on high-risk AI systems. This right is important because it ensures individuals gain insight into how decisions are made and what factors play a role. This contributes to the transparency and accountability of AI-based decision-making processes and allows individuals to object or request further explanation if they feel disadvantaged.
Why Is This Important?
The EU AI Act places primary responsibility on the provider of the AI system but also emphasizes the importance of cooperation between all actors in the value chain. Transparency, compliance with the law, and monitoring are essential here. By placing obligations on different actors, the legislation ensures that AI systems are not only safe and reliable but also deployed responsibly. Sharing responsibilities ensures that each actor plays their role in minimizing risks and increasing trust in AI technology.
The cooperation between providers, users, importers, distributors, and other actors is crucial to ensure AI systems meet the established standards and that incidents are quickly addressed and resolved. Transparency is a key concept here: it is important that all involved parties have insight into the system's operation and potential risks so they can take appropriate measures to manage these risks.
Cooperation and Transparency: The Key to Success
The EU AI Act introduces a system of responsibilities designed to make AI systems safe and ethical. The cooperation between providers, users, importers, and other actors is crucial for the success of this regulation. Transparency about the operation and risks of AI systems is essential to ensure trust in AI technology. By openly communicating about AI's operation and the possible consequences of its use, all stakeholders can contribute to responsible AI deployment.
The success of the AI Act heavily depends on actors' willingness to cooperate and exchange information. For providers, this means being transparent about their systems' operation and being open to scrutiny by other actors and competent authorities. For users, this means being aware of AI systems' limitations and deploying these systems responsibly.
Conclusion
The EU AI Act creates a complex but necessary system of responsibilities for all involved actors in the value chain. The goal is clear: AI systems must not only be innovative but also safe, reliable, and ethically responsible. Only by working together and ensuring transparency can we shape the future of AI in Europe responsibly. The legislation emphasizes that AI's success in Europe depends not only on the technology itself but primarily on how we develop, deploy, and regulate this technology.
Curious about how the AI Act might affect your organization? Or want to know how to comply with the new regulations? Contact us for personalized advice and discover how to align your AI projects with EU AI Act requirements.
Test Your Knowledge 🎯
Now that you know everything about the AI value chain under the EU AI Act, are you ready to test your knowledge?