Update
11.12.2023
The European Parliament and the Council have reached a provisional agreement on the Artificial Intelligence Act (AI Act), establishing a ground-breaking regulation on the use and development of AI. This update provides three key insights for those interested in the world's first comprehensive AI law, which may set a global standard for AI regulation.
  • #1. Landmark European provisional agreement aims to regulate the use and development of AI through a 'risk-based' approach

    The EU AI Act sets out obligations for providers and users depending on the level of risk posed by artificial intelligence, classifying AI systems into limited-risk, high-risk and unacceptable-risk categories. AI applications that pose limited risk will be subject to light transparency obligations. AI systems deemed high-risk will have to comply with strict obligations, such as carrying out mandatory fundamental rights impact assessments; creating a robust cybersecurity framework; and implementing human oversight. AI systems that fall within the 'unacceptable risk' category will be prohibited, including biometric categorisation systems that use sensitive characteristics, such as political beliefs, sexual orientation and race; and social scoring based on social behaviour or personal characteristics.

  • #2. Specific requirements agreed for general purpose AI and remote biometric identification systems

    In addition to the requirements and prohibitions applicable to limited-risk, high-risk and unacceptable-risk AI systems, the AI Act will impose specific requirements for the provision of general purpose AI systems (also known as foundation models) and remote biometric identification systems.

    General purpose AI (GPAI) models and systems - Due to the rapid developments in the field of GPAI, the AI Act introduces a two-tiered approach, distinguishing between GPAI with and without 'systemic risk'. The categorisation depends on the computing power of the GPAI model, measured in floating-point operations ('FLOPs'), which indicates the amount of compute used to train and run the model (a rough illustration of this compute-based tiering is sketched after this overview). While all providers of GPAI systems will have to comply with transparency obligations, such as providing technical documentation; providing details about the training data; and complying with EU copyright law, providers of GPAI systems with 'systemic risk' will have to comply with additional requirements, such as implementing appropriate cybersecurity measures, as well as reporting obligations on energy efficiency and on serious incidents.

    Remote biometric identification (RBI) systems - Following discussions initiated by several Member States on the use of RBI systems for national security purposes, the initial ban on RBI systems has been dropped, and narrow exceptions for the use of RBI systems for law enforcement purposes in public spaces have been included in the AI Act. However, the use of RBI systems in this context will be subject to judicial authorisation and additional requirements.

  • #3. Governance framework will oversee the implementation of new rules and ensure coordination at European level

    The AI Act will introduce a governance framework by establishing a European 'AI Office' that will coordinate and enforce the AI Act at European level. National competent market surveillance authorities will supervise the implementation of the AI Act at national level. The AI Act will also include measures specifically intended to support innovation, such as regulatory sandboxes and real-world testing.

    Furthermore, the AI Act provides for fines for infringements of the AI Act, set as a percentage of the infringing company's global annual turnover in the preceding financial year or a pre-determined amount, whichever is higher (this mechanism is illustrated in the second sketch below). Depending on the infringement and the size of the company, this will be (i) EUR 35 million or 7%; (ii) EUR 15 million or 3%; or (iii) EUR 7.5 million or 1.5%. However, the AI Act will provide for proportionate caps on administrative fines for small and medium-sized enterprises (SMEs) and start-ups. Such measures are intended to prevent the stifling of innovation and to create a proportionate governance framework.
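
For readers who want a concrete feel for the compute-based tiering described under #2, the following Python sketch is a rough, non-authoritative illustration: the 1e25 FLOPs threshold reflects the figure reported around the provisional agreement, and the '6 x parameters x training tokens' estimate is a common rule of thumb; neither is quoted from the text of this update.

# Illustrative sketch only: the threshold value and the compute heuristic below are
# assumptions, not provisions quoted from the AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # reported presumption threshold for 'systemic risk'


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rule-of-thumb estimate of total training compute: roughly 6 FLOPs per parameter per token."""
    return 6.0 * parameters * training_tokens


def gpai_tier(training_flops: float) -> str:
    """Map estimated training compute to the two tiers described above."""
    if training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "GPAI with systemic risk (transparency plus additional obligations)"
    return "GPAI without systemic risk (baseline transparency obligations)"


if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
    flops = estimate_training_flops(70e9, 2e12)
    print(f"Estimated training compute: {flops:.1e} FLOPs -> {gpai_tier(flops)}")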
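
The 'whichever is higher' fine mechanism under #3 can likewise be expressed as a simple calculation. In the sketch below, only the three amount/percentage pairs come from the update; the generic tier labels and the example turnover figure are hypothetical.

# Illustrative sketch of the 'whichever is higher' fine mechanism described above.
# Only the amount/percentage pairs are taken from the update; the tier labels and
# the example turnover are hypothetical.

FINE_TIERS = {
    "tier_i": (35_000_000, 0.07),    # EUR 35 million or 7% of global annual turnover
    "tier_ii": (15_000_000, 0.03),   # EUR 15 million or 3%
    "tier_iii": (7_500_000, 0.015),  # EUR 7.5 million or 1.5%
}


def maximum_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed amount and the turnover-based percentage for a given tier."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)


if __name__ == "__main__":
    # Hypothetical company with EUR 2 billion global annual turnover and a tier (i) infringement:
    # 7% of EUR 2 billion = EUR 140 million, which exceeds the EUR 35 million floor.
    print(f"Maximum fine: EUR {maximum_fine('tier_i', 2_000_000_000):,.0f}")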
