The EU AI Act was adopted by the European Parliament today and is expected to enter into force within a few months, with its first substantive provisions taking effect before the end of 2024.
The EU AI Act applies across the AI lifecycle – from the providers who develop AI technologies to the deployers who use them – and organisations across industries have been watching its progress closely. Now that it is finally approved, we set out below what’s next, and the key practical steps organisations should be taking now to incorporate EU AI Act compliance into their AI strategy and governance. The Act applies not only to organisations within the EU but also to those based outside it whose AI systems are placed on the EU market or whose outputs are used in the EU, so many multinational companies will be caught by its provisions.
The Act takes a risk-based approach, categorising AI systems according to their use. Uses posing an unacceptable risk – such as facial recognition in public places – are banned outright. High-risk uses, such as AI used in employment decisions, are subject to conformity assessments and risk management obligations. Limited-risk uses, such as chatbots, carry information requirements, while minimal-risk uses are addressed through voluntary codes of conduct. General purpose AI models – such as the large models underpinning the generative AI tools produced by the big tech companies – are also regulated by the Act.
What’s next?
The final text will be prepared for official publication and will enter into force 20 days after its publication in the Official Journal of the European Union. Obligations under the Act will then come into force in stages, reflecting the risk-based categorisation of AI systems:
- 6 months after entry into force – likely late 2024 – the ban on prohibited systems takes effect. This covers a limited set of use cases deemed to pose an unacceptable risk to fundamental rights, which must be phased out. Prohibited use cases include the use of subliminal techniques, systems that exploit a person’s vulnerabilities, biometric categorisation based on sensitive characteristics, social scoring, predictive policing directed at individuals, certain uses of facial and emotion recognition, and ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except in certain limited circumstances.
- 12 months after entry into force – likely mid-2025 – obligations for general purpose AI (GPAI) governance become applicable. GPAI models do not need to go through a pre-market conformity assessment, and the obligations imposed on their providers are generally less onerous than those for high-risk systems. They are, however, subject to requirements relating to technical documentation, policies to comply with EU copyright law, making available a “sufficiently detailed” summary of the content used to train the model, and the labelling of AI-generated or manipulated content. GPAI models deemed to present “systemic risk” are subject to additional requirements.
- 24 months after entry into force – likely late 2026 – the AI Act becomes generally applicable. This includes the full weight of obligations applicable to most high-risk AI systems (except those discussed below), including pre-market conformity assessment, quality and risk management systems, and post-market monitoring. The Act contains a detailed list of defined high-risk use cases in Annex III, which can be amended to reflect future developments and will be supplemented with practical guidance and examples from the EU Commission.
- 36 months after entry into force – likely late 2027 – the Act applies to products already required to undergo third-party conformity assessments. The AI Act takes a slightly different approach for AI systems that are themselves a product, or are intended to be used as a safety component of a product, where that product is already required to undergo a third-party conformity assessment under EU product legislation. Examples of products in this category range from medical devices to toys, and the full list of relevant EU product legislation is set out in Annex I of the Act. Existing sector-specific regulators will retain responsibility for enforcing the Act against these products, which therefore have an extra year to comply.
- AI systems already on the market have an additional period for compliance. High-risk AI systems already on the market will only become subject to the Act if their design undergoes significant changes. GPAI models already on the market will have an additional two years to comply.
In parallel, the EU Commission and Member States will need to put in place the framework for enforcement of the Act. That includes:
- Identification, creation and staffing of relevant supervisory authorities. The EU AI Office has been created to support the governance bodies in EU Member States and to enforce the rules for GPAI models. Member States will be required to designate national supervisory authorities to enforce the ban on prohibited systems and the obligations applicable to high-risk AI systems. For a defined list of areas regulated by existing EU product safety legislation, from aviation to medical devices, responsibility for enforcement will rest with existing sector-specific regulators, who will need to ensure they have the appropriate technical expertise to take on this new regulatory burden.
- Laying down penalties. The AI Act gives Member States the power to set penalties and other enforcement measures, subject to maximum caps of up to 7% of total worldwide annual turnover or EUR 35 million, whichever is higher, depending on the obligation breached. For example, for a company with a worldwide annual turnover of EUR 1 billion, the highest cap would be EUR 70 million. These penalties will need to be determined shortly after the Act enters into force.
- Providing guidance on compliance with the Act. The Act requires the European Commission to develop guidelines on its practical implementation within 18 months of its entry into force. Member State supervisory authorities will also need to develop guidance on regulatory expectations under the Act.
What do organisations need to do now?
If you haven’t already conducted a risk assessment to identify the impact of the EU AI Act on your business, now is the time to get started: assess your AI systems to determine whether they will be subject to the EU AI Act once it enters into force and becomes applicable, and into which risk category they will fall.
Of course, compliance with the EU AI Act will be only one part of your Responsible AI governance programme. The EU AI Act may be heralded by the EU as the first comprehensive AI law, but there are many AI-related developments being introduced by lawmakers across the world and, of course, regulators are already scrutinising organisations’ compliance with existing laws when it comes to AI (including with respect to data privacy, consumer protection, and discrimination).
Accordingly, we recommend that you:
- audit your development and use of AI within the organisation and across your supply chain;
- decide what your AI principles and red lines should be (these are likely to include ethical considerations that go beyond legal requirements, including the parameters set by the EU AI Act);
- assess existing AI risks and controls, and augment them where required (including to meet applicable EU AI Act requirements), both at an enterprise and a product lifecycle level;
- identify relevant AI risk owners and internal governance team(s);
- revisit your existing vendor due diligence processes related to both (i) AI procurement and (ii) the procurement of third-party services, products and deliverables that may be created using AI (in particular, generative AI systems);
- assess your existing contract templates and identify any updates required to mitigate AI risk; and
- continue to monitor AI and AI adjacent laws, guidance and standards around the world to ensure that the company’s AI governance framework is updated in response to further global developments as they arise.
Baker McKenzie has a team of dedicated experts who can help you with all aspects of EU AI Act compliance, Responsible AI governance and related policies and processes.