Introduction
The European Union has proposed pioneering new rules to govern artificial intelligence (AI) systems and build trust in human-centric technology. The EU Artificial Intelligence Act (AI Act) is a landmark piece of legislation providing a comprehensive framework which will specifically regulate AI systems deployed in the EU. At its heart, the AI Act seeks to address significant risks posed by AI in relation to safety, transparency and human oversight, discrimination and bias, privacy erosion, and threats to EU fundamental rights. A new regulator, the AI Office, will be established to oversee the new rules.
An important feature of the AI Act is its extraterritorial effect – it will apply not just to entities within the EU but also to providers, developers, deployers and distributors of AI systems based outside the EU (as well as importers and authorised representatives) where the output produced by the system is used within the EU. However, the AI Act will not apply to AI systems used exclusively for military or defence purposes, AI systems developed and used solely for scientific research and development, or people using AI for non-professional purposes.
Approach
The AI Act takes a risk-based approach, calibrating regulatory obligations to the level of risk posed by specific use cases.
Categorising AI systems
Prohibited. Certain AI systems deemed to present an unacceptable risk will be banned within the EU, such as:
- predictive policing systems based on profiling, location, or previous criminal activity;
- real-time remote biometric identification systems in publicly accessible areas (subject to narrow law enforcement exceptions);
- facial recognition databases using untargeted scraping of internet/CCTV facial images;
- emotion recognition systems in the workplace and educational institutions;
- social scoring systems based on social behaviour or personal characteristics; and
- manipulative systems which circumvent free will or exploit vulnerabilities.
High risk. A fairly wide range of AI systems are deemed high risk, such as:
- AI used in critical infrastructure, where life and health could be at risk;
- AI systems which carry out profiling of individuals;
- AI systems that are safety components of products; and
- AI systems used in recruitment and the wider employment arena, or which otherwise pose a risk of harm to health and safety or an adverse impact on EU fundamental rights.
The AI Act is mostly concerned with the regulation of high risk systems.
Providers (especially developers) of high risk AI systems face a significant compliance burden, with the most onerous obligations applying across the entire lifecycle of their systems.
In view of the size of potential fines, the consequences of misclassifying a high risk AI system will be significant. This is intended to deter companies from simply claiming that their AI systems do not pose high risks.
General purpose AI. There are also specific provisions which apply to general purpose AI systems (GPAI). Providers of GPAI systems must provide technical documentation and instructions for use, put in place a policy to respect EU copyright law and publish a summary of the content used to train their models. Where distributed on a free and open source basis, the technical documentation and instructions for use requirements fall away.
However, where large GPAI models (including open source models) are designated by the AI Office as GPAI models with systemic risk resulting from so-called “high impact capabilities,” providers are subject to a range of additional requirements (model evaluation, pre-market adversarial testing, post-market monitoring and incident reporting). They must also ensure cybersecurity protections and comply with codes of practice and harmonised standards. Providers must notify the European Commission within two weeks if their GPAI models meet the systemic risk threshold, i.e. where the cumulative amount of compute used for training is greater than 10^25 floating point operations (FLOPs).
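To make the threshold concrete, the following minimal Python sketch estimates a model's cumulative training compute and tests it against the 10^25 FLOPs mark. The "6 × parameters × training tokens" heuristic is a rough convention from the machine learning literature, not something the Act specifies, and the figures are illustrative only.

```python
# Hypothetical back-of-the-envelope check against the 10^25 FLOPs
# systemic-risk threshold. The 6 * params * tokens rule of thumb for
# dense transformer training compute is an assumption borrowed from
# the ML literature, not something the AI Act prescribes.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute threshold

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute for a dense model."""
    return 6 * n_parameters * n_training_tokens

def may_need_commission_notification(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the systemic-risk threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a 70bn-parameter model trained on 15tn tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.2e} FLOPs; notify? {may_need_commission_notification(70e9, 15e12)}")
```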
Companies deploying GPAI models in their businesses should be aware that if, downstream, they make ‘significant modifications’ to these models, for example to adapt them to specific use cases, they will assume the responsibilities of a provider rather than those of a deployer, with the consequent liability and compliance obligations under the AI Act (although simply fine-tuning a general purpose model will be permitted).
Limited risk. This refers to AI systems subject to specific transparency obligations, such as basic web-based chatbots programmed with set questions and tailored responses: users should be made aware that they are interacting with a machine so that they can take an informed decision on whether to continue or step back. The AI Act imposes relatively light touch transparency obligations on providers of these types of systems.
Minimal risk. The vast majority of AI systems will be treated as “minimal risk” and as such will fall outside the scope of the AI Act altogether, although existing laws and regulations such as the General Data Protection Regulation, anti-discrimination laws and copyright laws will continue to apply. Examples include AI-enabled video games and spam filters.
Timeline
The legislative progress of the AI Act has been a lengthy one.
- June 2018: European Commission (EC) launches the European AI Alliance and its High-Level Expert Group on AI
- December 2018: EC publishes its Coordinated Plan on AI
- April 2021: EC unveils its proposal for an EU Artificial Intelligence Act (AI Act)
- June 2023: European Parliament (EP) approves its version of the draft AI Act
- December 2023: EP and the Council reach provisional agreement with the EC on the AI Act
- January 2024: Unofficial versions of the consolidated text of the AI Act leaked online
- February 2024: EU member states expected to endorse the final consolidated text of the AI Act
- June 2024: Formal approval of the AI Act by the EP and the Council expected, followed by publication in the Official Journal of the EU
Transition
A tiered timeline for compliance will apply, counting from the AI Act’s entry into force, which will take place on the 20th day after its publication in the EU Official Journal (an illustrative date calculation appears at the end of this section):
- the ban on prohibited AI systems will take effect after 6 months;
- the regulation of GPAI systems will kick in after 12 months (or 36 months if they are already on the market); and
- high risk systems will have either 24 or 36 months to comply, depending on their use case (see Annexes II and III of the Act).
High risk systems already on the market before the relevant compliance date will not be covered at all unless they subsequently undergo significant changes in design. This appears to be a significant loophole in the new regulations and runs counter to the usual principles of product safety legislation, which are intended to close down gaps in safety, not allow them to continue.
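By way of illustration, the staggered deadlines can be derived mechanically once the publication date is known. The sketch below uses a hypothetical publication date (the real date was not yet fixed when this note was written); the month offsets mirror the tiers listed above.

```python
# Illustrative derivation of the AI Act's staggered compliance dates.
# The publication date below is hypothetical; offsets mirror the tiers above.
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months (assumes the day of month exists in the target month)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

publication = date(2024, 6, 1)                       # hypothetical OJ publication date
entry_into_force = publication + timedelta(days=20)  # 20th day after publication

milestones = {
    "ban on prohibited AI systems": 6,
    "GPAI rules apply (new models)": 12,
    "high risk systems (Annex III)": 24,
    "high risk systems (Annex II) / grandfathered GPAI": 36,
}
for label, months in milestones.items():
    print(f"{label}: {add_months(entry_into_force, months).isoformat()}")
```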
What does this mean for businesses?
It will be important for businesses, whether they are deployers, developers, manufacturers, providers, importers or distributors of AI systems, particularly where those systems are considered high risk, to get ahead of the new regulations in view of the risk of huge financial penalties for the worst offences, capped in each case at the greater of a fixed sum or a percentage of global annual turnover (illustrated after the list):
- for violations of the prohibited AI practices, €35 million or 7% of global annual turnover;
- for breach of the transparency and information requirements, €7.5 million or 1.5% of global annual turnover; and
- for violations of the AI Act’s other obligations, €15 million or 3% of global annual turnover.
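As a simple illustration (the turnover figure is invented), the cap in each tier works out as the greater of the fixed sum and the percentage of global annual turnover:

```python
# Illustrative only: each fine tier is capped at the greater of a fixed sum
# or a percentage of global annual turnover. Figures per the tiers above;
# the example turnover is invented.

def fine_cap(fixed_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Return the applicable cap: the greater of the two limbs."""
    return max(fixed_eur, turnover_pct * global_turnover_eur)

turnover = 2e9  # hypothetical EUR 2bn global annual turnover
print(f"Prohibited practices: EUR {fine_cap(35e6, 0.07, turnover):,.0f}")    # 140,000,000
print(f"Transparency breaches: EUR {fine_cap(7.5e6, 0.015, turnover):,.0f}") # 30,000,000
print(f"Other obligations: EUR {fine_cap(15e6, 0.03, turnover):,.0f}")       # 60,000,000
```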
More proportionate caps will apply to fines for SMEs and start-ups. Businesses would be well-advised to start preparing for the AI Act now. Just as with the GDPR, early moves towards meeting the coming accountability requirements can ease compliance down the road. Practical steps which businesses can start to take include:
- Impact assessment: mapping, classifying and categorising AI systems in use, under development or under consideration against the AI Act's risk levels, and evaluating when high risk AI tools may need to be retired or re-designed (a minimal inventory sketch follows this list)
- Gap analysis: performing gap assessments between current policies and the new requirements, and undertaking risk analysis of dataset biases and data governance plans, as well as analysis of the new transparency requirements
- Risk management: developing and integrating risk mitigation and risk management frameworks and systems aligned with the new regulation. Developing a code of conduct tailored to the organisation's AI applications can help facilitate a smooth implementation of the AI Act's requirements
- Documentation: determining and developing required documentation and reporting mechanisms, developing and implementing appropriate policies and procedures, and creating transparency and contestability mechanisms for AI systems
- Procurement: putting in place tools and processes which ensure that procurement teams conduct appropriate vendor selection and contracting, including developing AI-specific playbooks and contractual clauses, as well as implementing monitoring systems and incident response mechanisms
- Training: putting in place continuing education to ensure that directors and employees stay on top of risks and follow approved policies and procedures which promote the safe, transparent, fair and ethical use of AI
- Governance: establishing a strong governance framework, whether developing AI systems in-house or adopting third-party AI solutions, with top-down buy-in and engagement by leadership and other stakeholders, including data protection officers and risk managers/compliance
- International standards and guidelines: staying informed about technical standards and guidelines emerging from organisations like ISO or CEN-CENELEC, as well as AI implementation and governance frameworks, such as the NIST Risk Management Framework. These standards, guidance and frameworks will be pivotal in bridging the gap between legal requirements and technical machine learning procedures.
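As a starting point for the impact-assessment step above, a minimal sketch of an AI system inventory might look like the following. The risk tiers mirror the Act's categories; the records and review dates are invented, and real classification calls for legal analysis rather than a lookup table.

```python
# Minimal sketch of an AI system inventory for the impact-assessment step.
# Tier names mirror the Act's categories; all records here are invented.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = 1
    HIGH = 2
    LIMITED = 3
    MINIMAL = 4

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    tier: RiskTier
    review_due: str  # next re-assessment date, ISO format

inventory = [
    AISystemRecord("cv-screener", "recruitment shortlisting", RiskTier.HIGH, "2024-09-01"),
    AISystemRecord("faq-bot", "customer support chatbot", RiskTier.LIMITED, "2025-01-01"),
    AISystemRecord("spam-filter", "inbound email filtering", RiskTier.MINIMAL, "2025-01-01"),
]

# Surface the riskiest systems first so remediation effort is prioritised.
for rec in sorted(inventory, key=lambda r: r.tier.value):
    print(f"{rec.tier.name:10} {rec.name}: {rec.purpose} (review by {rec.review_due})")
```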
The field of AI policy and standards is complex and rapidly evolving. Taking a systematic, forward-looking, ethical and socially engaged approach to understanding developments will equip businesses to adapt responsibly, maintaining compliance, ethics and public trust in equal measure. With expertise across IT, intellectual property, competition, sourcing, procurement, tax, cyber security, data protection, financial services, consumer protection and employment law, we provide comprehensive support on issues of AI governance and compliance as well as advising the various actors within the AI supply chain.
If you have any questions, please speak to Tim Wright or Nathan Evans.
This note is current as at 31 January 2024 and is based on the recently leaked near-final text of the AI Act. Given the time remaining before the Act's formal adoption and entry into force, it is possible that the text will be further amended.