Publication
21.05.2025
As of 2 February 2025, the first provisions of the AI Act have entered into application, most notably those on prohibited AI practices and on the obligation to ensure a sufficient level of AI literacy among the staff of providers and deployers and other persons dealing with the operation and use of AI systems, also known as “AI literacy”. The European Commission has recently issued several publications providing key guidance on how entities are to achieve this ahead of the supervision and enforcement rules applying from 3 August 2026 onwards.
  • Survey highlights from the AI Office

    On 28 March 2025, the AI Office, an internal office of the European Commission based in Brussels and Luxembourg, published a “Living Repository of AI Literacy Practices” containing the results of a survey conducted among AI Pact pledgers (including, amongst others, Adobe, Booking, Capgemini, Generali, …). The results are presented as examples of practices aiming to achieve AI literacy, with the disclaimer that the practices included therein are neither endorsed nor presumed to comply with the AI Act and should only serve as inspiration for other stakeholders.

    The main approach emerging across the responses is reliance on mandatory (e-)learning modules tailored to the department or service in question and to the circumstances in which the tool is deployed.

  • Strategic support measures and enforcement roadmap

    On 9 April 2025, the European Commission published the AI Continent Action Plan, setting out the key areas of focus for EU AI policy: leveraging the EU’s research base and highly skilled labour through appropriate financial resources, fostering a thriving startup and scaleup scene with easier access to high computational resources, and, last but not least, enforcing the AI Act. On this point, the European Commission recognises the crucial phase that the Act’s initial implementation represents and focuses its policy on streamlining compliance burdens. Concretely, an AI Act Service Desk will be launched to act as a central information hub on the AI Act, “allowing stakeholders to ask for help and receive tailor-made answers” in the form of “practical advice”. The European Commission will also focus on providing “templates, guidance, webinars and training courses to streamline procedures and facilitate compliance” and invites stakeholders to take advantage of the future regulatory sandboxes, which are expected to be operational by April 2026.

    Although these measures do respond to stakeholders’ concerns and criticisms regarding the adoption of the AI Act, the policy does not focus on removing burdens, but rather on helping stakeholders manage the new obligations they will face. This also highlights the thin line that the European Commission and other regulators are walking when striking a balance between their advisory role in promoting compliance practices and their power to effectively supervise and, if needed, take corrective measures against stakeholders that do not comply with rules which, as stressed in the plan, were adopted to safeguard EU values of democracy and cultural diversity. This being said, stakeholders should make use of the tools and resources made available in their compliance efforts to demonstrate their good-faith will to comply with regulatory requirements.

  • Clarifications from the latest Q&A on AI literacy

    On 7 May 2025, the European Commission published a Q&A on the AI literacy requirements under the AI Act. Key takeaways are notably that the obligation’s scope is not limited to employees of providers and deployers, but also extends to “persons broadly under the organisational remit”, citing as examples contractors, service providers and, most importantly, clients of the provider/deployer. The question, however, remains what such literacy should look like for end-users, especially consumers, and how it should be distinct from the transparency requirements provided in the AI Act.

    Also in terms of scope, the Q&A confirms that staff using off-the-shelf AI tools such as ChatGPT in their workflow will trigger the AI literacy obligation, implying more broadly that such use falls within the scope of the AI Act. Stakeholders must therefore ensure that they are aware of the AI uses within their organisation, including unsanctioned uses by their employees, in order to map out their obligations under the AI Act and potentially safeguard the personal data or confidential client data they process (especially in light of their professional secrecy obligations).

    The Q&A also hints that the content of the training should cover, amongst others, the different types of AI, the definition of general-purpose AI (GPAI), the concrete AI use cases within the organisation, the opportunities and risks of those use cases (e.g. hallucination), and any practical information that persons may need to safely operate, deploy or otherwise interact with the AI. The approach should be risk-based and reflect the entity’s role as either a provider or a deployer.

    The Q&A does, however, clarify that the AI literacy obligation does not extend to governance requirements, meaning that these are to be derived from the other provisions of the AI Act entering into application at a later date. Furthermore, testing persons on their AI literacy is not per se required, but it may serve as an additional safeguard to evidence the level of AI literacy in case of, for example, an inspection by the relevant sectoral authority. From an accountability standpoint, internal records of attendance are deemed sufficient.

This article was published in the May 2025 edition of Agefi Luxembourg.
