
Update: 12.12.2023
Artificial intelligence (AI) has rapidly transformed the way we live, work, and interact with technology.

As AI systems become more integrated into our daily lives, concerns about data protection and ethics have taken center stage. In the EU, lawmakers have responded with a web of regulations aimed at ensuring ethical AI development and safeguarding data protection: the proposed Artificial Intelligence Act (the AI Act), the proposed Artificial Intelligence Liability Directive (the AILD), and the existing General Data Protection Regulation (GDPR). However, these legal frameworks are not fully aligned and may create challenges and uncertainties for users and providers of AI systems, and for data subjects whose personal data are processed through these systems.

In this Insight article, Danique Knibbeler and Sarah Zadeh analyse some of these gaps.

Understanding the new AI legal framework: the AI Act and the AILD

The EU Parliament has prioritized ensuring that all AI systems developed, distributed, and used in the EU are safe, transparent, traceable, and non-discriminatory. Regulating AI and ensuring human oversight of AI systems is part of the EU's digital strategy.[1] Within the European legislative package, the term 'AI system' takes a central role. Its legal definition can be found in the draft AI Act that was proposed in April 2021 by the EU Commission: 'Software developed using technologies such as machine learning, logic-based approaches, and statistical approaches, which are applied to a given set of human-defined objectives and generates outputs such as content, predictions, recommendations or decisions that affect the environments with which it interacts.'

AI Act risk categories

A characteristic feature of the AI Act is its so-called 'risk-based' approach, which has also become known as the pyramid structure of the AI Act. AI systems will be classified according to the degree of risk they pose to the safety of individuals or fundamental rights.

The AI Act defines four levels of risks in AI:

  1. Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods, and rights of people will be banned. Examples range from social scoring by governments to toys using voice assistance that encourage dangerous behavior.
  2. High risk: AI systems that, in light of their intended purpose, pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence, and that are used in a number of specifically pre-defined areas. Examples include a resume-scanning tool that ranks job applicants based on automated algorithms, or remote biometric identification systems.
  3. Limited risk: AI systems that pose only a limited risk to individuals and are therefore only required to meet certain transparency obligations. An example is a chatbot, where the user must be informed that they are interacting with a machine rather than a human.
  4. Minimal or no risk: AI systems that pose minimal or no risk and that are subject to no additional obligations, such as AI-enabled video games or spam filters. The EU Commission has stated that the majority of AI systems will fall within this category.

Due to the rise in the development and usage of generative AI, the EU Commission has included further transparency obligations in relation to generative AI, namely disclosing that content was generated by AI, designing the model in such a manner as to prevent it from generating illegal content, and disclosing the copyrighted material within the datasets that have been used to train the models.

The EU Parliament adopted its negotiating position on the AI Act on June 14, 2023. At this stage in the legislative process, ongoing negotiations with the Council are underway to finalize the AI Act by the end of 2023.

AILD

With the AILD, the EU Commission aims to regulate compensation for damage caused intentionally or negligently by AI systems. The Commission notes that while the AI Act, once in force, will reduce the risks to security and fundamental rights, it will not stop the use of AI and therefore will not eliminate the residual risk of potential damage caused (directly or indirectly) by AI, which will continue to be used in society.[2] The Commission therefore seeks to establish common rules on non-contractual civil liability for damage caused by or through the use of AI systems. The AILD has an extraterritorial effect as it applies to providers and/or users of AI systems available on or operating in the Union market. These liability rules will be essential to promote the deployment of trustworthy AI and to reap its full benefits for the internal market by ensuring that victims of harm have the same level of protection for harm caused by AI systems as for harm caused by other technologies. The AILD would create a rebuttable presumption of causality, thereby reducing the burden of proof for victims. In addition, the AILD regulates the power of national courts to order the disclosure of the 'black box' of high-risk AI systems in order to obtain evidence. While the AI Act aims to prevent harm caused by AI, the AILD aims to regulate compensation for harm caused by AI through the application of liability law.

Once adopted, the AI Act and the AILD will be the world's first regulations on AI.[3] After the AI Act enters into force, companies that use and manufacture AI systems will have 24 months to comply. It is not yet clear when the AILD will be adopted, but member states will have two years to implement it afterward.

The interplay between the AI Act and the GDPR

Scope and applicability

One of the main differences between the AI Act and GDPR is the scope of application. The AI Act applies to providers, users, and other participants across the AI value chain (such as importers and distributors) of AI systems placed on or used in the EU market, regardless of their location. In contrast, the GDPR applies to controllers and processors that process personal data in the context of activities of an establishment of a controller or processor in the EU, or that offer goods or services to or monitor the behavior of data subjects in the EU. This means that AI systems that do not process personal data or that process personal data of non-EU data subjects can still fall under the AI Act but not under the GDPR.

In addition, the GDPR is based on the fundamental right to privacy, under which data subjects can exercise their rights against the parties processing their personal data. The AI Act, on the other hand, focuses on AI as a product: even though it seeks to implement a 'human-centric approach,' it regulates AI primarily through the lens of product regulation. This means that individuals are only indirectly protected from faulty AI systems and do not have an explicit role in the AI Act. In other words, stopping an unlawful AI system that uses personal data is done under the AI Act, but exercising data subjects' rights in relation to their personal data is done under the GDPR.

Qualification of parties

Providers and users are assigned specific obligations in the AI Act, but most of the regulatory burden is placed on providers, especially in the context of high-risk AI systems. In the context of the GDPR, these providers will generally act as controllers in the development phase, whilst in a B2B context they will most likely act as processors in the deployment phase. Providers and users may also qualify as joint controllers in situations where they jointly determine the purpose and essential means of the processing activity carried out with the AI system. It is also arguable that the legal obligation that providers have under the AI Act to determine the intended purpose of the AI system and the technical data collection can lead to the qualification of providers as joint controllers with their users; for joint controllership, it is not required that they have access to the personal data used for the input or output of the AI system. We recommend that the European Data Protection Board (EDPB) issue guidelines to clarify the roles of providers and users as described in the AI Act.

Human oversight and risk management system

A third gap between the AI Act and the GDPR concerns human oversight: the AI Act requires providers to implement interface tools that enable human oversight and to take measures to ensure that oversight, but it does not specifically define which measures should be taken and lacks further guidance on this aspect. Human decision-makers need to be given clear instructions and training on, for example, how the AI system works, what kind of data will be used as input, what kind of result should be expected, and how to evaluate the AI system's recommendations. It would be worth considering an obligation for providers to give their users instructions on how the AI system works, and an obligation for users to inform and train their human decision-makers on the elements in Article 14(4) of the AI Act, so that the human decision-makers are able to make meaningful, informed decisions and avoid (automated) bias. If human oversight falls short of the required degree because human decision-makers have not been properly trained, the decision-making may no longer be deemed partially automated, with the implication that Article 22 of the GDPR, and its corresponding obligations, apply.

Impact assessments

Pursuant to Article 35 of the GDPR, controllers are obliged to carry out a data protection impact assessment (DPIA). The AI Act explicitly refers to this obligation, by mentioning in Article 29(6) of the AI Act that users of high-risk AI systems should use the information as received from the provider pursuant to Article 13 of the AI Act, to carry out DPIAs, since a high-risk system often processes personal data.

Special categories of personal data

The AI Act notes that providers of high-risk AI systems may, to the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection, and correction in relation to the high-risk AI systems, process special categories of personal data referred to in Article 9 of the GDPR. However, the AI Act also explicitly states that it does not provide a legal ground for the processing of special categories of personal data. The lawful processing ground might be legitimate interest (Article 6(1)(f) of the GDPR) to avoid biased data and discrimination. However, when processing special categories of data, an exemption as included in Article 9(2) of the GDPR must be met, and therefore legitimate interest as such is not enough. Due to the amount of personal data necessary for training purposes, we do not believe that obtaining explicit consent from data subjects for these purposes is advisable or even feasible. Article 10(5) of the AI Act could potentially serve as an exemption pursuant to Article 9(2)(g) of the GDPR, namely the substantial public interest to prevent discrimination and biased AI systems. However, as part of Article 9(2)(g) of the GDPR, the provider must take suitable and specific measures to safeguard the fundamental rights and the interests of the data subject, which can be challenging when large amounts of special category data will be processed.

Competent authorities

As mentioned in paragraph 5.2.6 of the explanatory memorandum of the AI Act, the AI Act will establish a European Artificial Intelligence Board, which aims to facilitate a smooth, effective, and harmonized implementation of the AI Act by contributing to the cooperation between the national supervisory authorities and the EU Commission and by providing advice and expertise to the EU Commission. The EDPB will act as the competent authority for the supervision of the EU institutions, agencies, and bodies when they fall within the scope of the AI Act.

At the national level, EU Member States will be obliged to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the AI Act. Pursuant to Article 63(4) of the AI Act and paragraph 1.2 of the explanatory memorandum of the AI Act, where AI systems are provided or used by regulated credit institutions, the authorities responsible for the supervision of financial services legislation should be designated as competent authorities for supervising the requirements in the AI Act, to ensure coherent enforcement of the obligations under the AI Act and the financial services legislation.

Establishing and/or designating one or more competent authorities can pose challenges. The AI Act, being a horizontal framework without a specific sector focus, introduces the potential that multiple authorities may come into play due to sector-specific laws and regulations, such as financial regulators in relation to, for example, the use of AI in lending or loan management, or healthcare regulators in relation to the use of AI in medical devices. To ensure proper coordination and to provide clear and concise guidance to both providers and users, the competence of, and interplay between, the different relevant authorities and their guidance should be further clarified.

Conclusion and recommendations

In summary, the EU legal framework for AI and the GDPR is complex and evolving, with some gaps and overlaps between different instruments. The AI Act and GDPR have different scopes, definitions, and requirements, which can create challenges for compliance and consistency.

Further guidance from, for example, the EDPB, would be desirable, particularly in relation to the use and interpretation of terminology and (data protection impact) assessment methods, as well as collaboration and coordination between relevant authorities and their guidance to ensure consistent enforcement and interpretation.

In order to navigate the interplay and potential gaps between the GDPR and the AI Act, we recommend identifying whether you are subject to the AI Act and the GDPR, or to other additional (sector-specific) regulations; mapping out the potential overlap; conducting or updating your risk assessments accordingly, including the DPIAs and any necessary ethical assessments; and updating policies, notices, documentation, and technical and organizational measures where necessary in line with the outcome and impact of those assessments. Such an approach is required not just at the beginning of the development or deployment of AI systems, but throughout the whole lifecycle of the AI systems, to ensure trustworthy, ethical, and secure AI.
