
The EU’s new AI Act – how it will affect developers and operators of AI systems

The European Union’s proposed Regulation laying down harmonised rules on artificial intelligence[1] (the Artificial Intelligence Act) reached a milestone when EU ministers adopted the Council’s general approach at the Transport, Telecommunications and Energy (TTE) Council meeting on 6 December 2022, following last-minute changes to the text made by the Czech presidency and distributed to members of the EU Permanent Representatives Committee on 18 November. Trilogue negotiations between the Parliament, the Council and the Commission could start in early 2023, with companies expected to be given a grace period, likely 24 to 36 months, to comply with the new rules.

A flagship EU initiative, the Artificial Intelligence Act will introduce the first comprehensive set of rules for artificial intelligence (AI) and is intended to ensure that AI is human-centric and trustworthy. The Artificial Intelligence Act adopts a risk-based approach to the regulation of a wide range of AI applications covering all sectors except AI systems exclusively developed for military use (e.g. lethal autonomous weapons). It will impose a range of potentially onerous compliance obligations on companies using AI systems, especially those whose systems are classified as high-risk.

The Artificial Intelligence Act defines AI very broadly, and reinforces that definition with specific categories and use cases, with different requirements imposed depending on the level of risk.

Risk categories

1. Unacceptable risk

    Four specific use cases are deemed unacceptable:

    • Subliminal techniques that distort a person’s behaviour in a way that may cause physical or psychological harm;
    • Exploiting vulnerabilities of specific groups of persons such as the young or elderly, and persons with disabilities;
    • Social scoring leading to unjustified and disproportionate detrimental treatment; and
    • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (except for specific actions like searching for missing persons or counterterrorism operations).

    AI systems categorised as unacceptable-risk are prohibited in the EU.

2. High risk

    Assessing whether an AI system is high-risk requires consideration of a number of factors, such as:

    • the intended purpose of the AI system;
    • the extent to which the AI system has been used or is likely to be used;
    • whether the AI system has already caused harm or created concern about harm, and the potential extent of such harm;
    • how dependent those who may be harmed by an AI system are on the technology;
    • how vulnerable those who are potentially impacted are to imbalances of power, knowledge, economic or social circumstances, or age; and
    • how easily the outcome of the system can be reversed.
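
    Purely as an illustration, and not a procedure prescribed by the Act, the Python sketch below shows one way a compliance team might record these triage factors so that each assessment is documented. All names are hypothetical and the structure is our assumption, not a legal test.

    from dataclasses import dataclass, fields

    @dataclass
    class HighRiskTriage:
        # Each field mirrors one factor from the list above; the names are
        # hypothetical and the structure is illustrative only.
        sensitive_intended_purpose: bool    # e.g. credit, employment, policing
        extensive_actual_or_likely_use: bool
        prior_harm_or_concern: bool
        potentially_severe_harm: bool
        affected_persons_dependent: bool    # can those affected avoid the system?
        affected_persons_vulnerable: bool   # power/knowledge/economic/age imbalances
        outcome_hard_to_reverse: bool

    def factors_present(triage: HighRiskTriage) -> list:
        """Return the names of the factors that apply, for the written record."""
        return [f.name for f in fields(triage) if getattr(triage, f.name)]

    record = HighRiskTriage(True, True, False, True, True, False, True)
    print(factors_present(record))  # document these and escalate for legal review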

    In addition, products falling under EU product safety legislation, such as toys or medical devices, are expressly considered high-risk, as are AI systems used in certain areas, including:

    • Biometric identification and categorisation of natural persons
    • Critical infrastructure where AI could put people’s life and health at risk
    • Educational and vocational settings where the system could determine access to education or professional training
    • Recruitment and workplace management
    • Access to essential private and public services (including access to financial services such as credit scoring systems)
    • Law enforcement (e.g. risk assessments, polygraphs, deepfake detection, and crime analytics)
    • Migration, asylum, and border control (including verifying the authenticity of travel documents)
    • The administration of justice and democratic processes

    Developers/providers of high-risk AI systems will need to carry out pre-deployment ‘conformity assessments’ to demonstrate that their systems meet all requirements of the Artificial Intelligence Act’s risk framework. For most high-risk systems the provider may perform this assessment itself, but in some cases, e.g. biometric identification systems such as facial recognition, the assessment must be performed by an independent third-party body designated in each member state, known as a notified body. A new conformity assessment will also be needed whenever a high-risk AI system which has already been placed on the market undergoes substantial modification.

    Providers of high-risk systems will also be required to carry out post-market monitoring and to register their systems in an EU database.

3. Limited – and minimal (or no) – risk

    Providers of AI systems which pose minimal (or no) risk, such as video games and spam filters, will not face specific regulatory requirements. Certain limited-risk use cases, however, such as deepfakes, chatbots and other automated systems designed for human interaction, will be subject to transparency requirements and will need to ensure that consumers know they are interacting with an AI system or with manipulated content.
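
    By way of a minimal sketch only (all names hypothetical), a chatbot operator might address this transparency requirement by disclosing the automated nature of the system at the start of each session:

    AI_DISCLOSURE = "Please note: you are interacting with an automated AI assistant."

    def first_reply_with_disclosure(reply: str, already_disclosed: bool) -> str:
        """Prepend the AI disclosure to the opening message of a session."""
        return reply if already_disclosed else AI_DISCLOSURE + "\n\n" + reply

    print(first_reply_with_disclosure("Hello! How can I help?", already_disclosed=False))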

    In addition, companies and industry associations are encouraged to publish and follow voluntary ‘codes of conduct’, which in effect extend the high-risk requirements to other AI systems, and to meet certain environmental sustainability, accessibility and diversity objectives.

The “Brussels Effect”

The EU often uses regulation to externalise its laws de facto, applying them to businesses outside its borders that do business with its consumers. An example is the General Data Protection Regulation (GDPR), which had a first-mover, global ‘standard-setting’ impact. Along with the AI Liability Directive and the revised Product Liability Directive, these new rules are part of the EU’s strategy to set the global gold standard for the regulation of AI. In a similar manner to the GDPR, the Artificial Intelligence Act will have extraterritorial effect: it will apply not just to developers and operators based in the EU but to any company which does business with consumers in the EU. Other countries are expected to take note and bring in similar legislation.

Achieving compliance

The Artificial Intelligence Act, combined with the new AI Liability Directive and the revised Product Liability Directive, makes the deployment of AI systems an enterprise-wide risk requiring C-level attention to the development, operation and oversight of those systems, especially those considered high-risk.

Faced with very large potential fines for non-compliance[2], companies should take steps to manage the risk arising from their use of such systems. These steps may include:

• Establishing and maintaining a comprehensive quality and risk management system, together with incident reporting processes and procedures;
• Ensuring training, validation and testing data sets are subject to appropriate data governance and management practices;
• Publishing and updating technical documentation of a high-risk AI system before it is placed on the market or put into service;
• Incorporating logging capabilities to ensure traceability of the AI system’s functioning throughout its lifecycle (a sketch of one possible approach follows this list);
• Guaranteeing a certain level of transparency and providing users with relevant information (for example the characteristics, capabilities and limitations of performance of the high-risk AI system);
• Putting in place measures to guarantee human oversight and ensuring high-risk AI systems can be overseen by natural persons while in use; and
• Designing and developing systems in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
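
As a sketch of the logging measure above, and assuming a simple JSON-lines audit file (the field choices are illustrative assumptions on our part, not requirements copied from the Act), a provider might record one traceable entry per prediction:

import json, logging, time, uuid

audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def log_inference(model_version, input_summary, output_summary, reviewer=None):
    """Append one audit record per prediction and return its identifier."""
    record_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "record_id": record_id,
        "timestamp": time.time(),
        "model_version": model_version,    # ties each output to a known build
        "input_summary": input_summary,    # summarise; avoid raw personal data
        "output_summary": output_summary,
        "human_reviewer": reviewer,        # supports the human-oversight measure
    }))
    return record_id

log_inference("credit-scorer-1.4.2", "applicant-features-hash=ab12",
              "score=640; application declined", reviewer="analyst-07")

Keeping such records append-only and tied to a specific model version is one way of supporting the post-market monitoring obligation mentioned earlier.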

Please contact Tim Wright if you would like to discuss legal risks and issues relating to implementing and operating AI systems.

[1] Proposal for a Regulation laying down harmonised rules on artificial intelligence | Shaping Europe’s digital future (europa.eu)

[2] EU member states will set the level of penalties for non-compliance; however, penalties of up to 6% of a company’s total worldwide annual revenue or €30,000,000, whichever is greater, are mooted for the worst offences.
