It has been another eventful month in the world of artificial intelligence! As the EU gave its final stamp of approval to the ground-breaking AI Act, businesses are facing the need to gear up for a new era of regulation aimed at reining in the risks of advanced AI systems. But that's not all: the upcoming UK general election could shake things up even further, with a potential Labour government promising to take a tougher stance on AI.
In this update, we dive into some recent developments shaping the future of AI governance.
EU AI Act receives final approval
The landmark EU Artificial Intelligence Act (AI Act) received final approval from the Council of the European Union on 21 May 2024. This comprehensive and first-of-its-kind legislation aims to regulate general-purpose AI models, impose obligations on developers and users of high-risk AI systems, and prohibit certain AI systems deemed to pose an unacceptable risk. The AI Act will enter into force on the 20th day after its publication in the Official Journal of the European Union, although its implementation will be phased. With its extraterritorial scope, the AI Act will apply to any company developing, deploying or using AI systems that operate within the EU, or whose AI outputs are used in the EU, regardless of where the company is based.
AI Liability Directive held up at Committee stage
First proposed in September 2022, the EU’s Artificial Intelligence Liability Directive (AILD) aims to adapt non-contractual civil liability rules to AI, and goes hand in hand with changes to the Product Liability Directive (PLD), a 40-year-old directive that assigns responsibilities and penalties for defective products. A new version of the PLD, adopted by the European Parliament in March 2024, explicitly covers software such as AI chatbots, as well as damage to psychological health and damage resulting from the destruction or irreversible corruption of data. The AILD also passed the March Parliament vote; however, the directive remains at Committee stage, so it is unclear when, and in what form, it will finally appear in the EU’s rulebook. Some commentators argue that there is no gap left to fill between the AI Act and the PLD.
EHRC updates approach to workplace regulation
The UK’s Equality and Human Rights Commission (EHRC) released an updated approach to regulating the use of AI in the workplace, aiming to mitigate the risks of discrimination and promote fairness and equality. Key principles highlighted by the EHRC include transparency and explainability; fairness and non-discrimination; data quality and governance; and data protection and privacy. The EHRC has made it clear that it will take enforcement action against employers who fail to meet their legal obligations under the Equality Act 2010 when using AI systems.
AI-generated works and copyright
In a landmark ruling, the Municipal Court in Prague held that AI-generated works cannot be protected under Czech copyright law, finding that "copyright is an absolute right belonging to an individual. If the image in question was not created personally by the applicant, but by an artificial intelligence, it cannot, by definition, be a copyrighted work." This ruling is reportedly the first time a European court has addressed the issue of AI and copyright. While the decision is specific to Czech law, it could potentially influence how other EU member states interpret and apply copyright laws in relation to AI-generated works.
Transatlantic collaboration for AI safety
The UK and US unveiled a new partnership at the start of April aiming to foster international co-operation in ensuring the responsible development and deployment of advanced AI systems. The two nations plan to join forces to develop rigorous testing frameworks for cutting-edge AI models, taking a collaborative approach to identifying potential risks and vulnerabilities and enabling proactive measures to mitigate them. They will also share critical information and insights on the capabilities and associated risks of AI models and systems.
Global AI safety institutes
In a separate initiative, the AI Summit held in London last year has culminated in a ground-breaking proposal to establish a network of global AI safety institutes. Summit participants unanimously agreed to create the network, which will serve as a set of international hubs for research, knowledge-sharing and capacity-building.
UK signals potential shift towards stricter AI regulation
There were also hints of high-level discussions underway within the UK government to explore the possibility of introducing more stringent regulations for AI development and deployment. Sources said that the discussions were influenced by the growing global momentum towards stronger AI governance, exemplified by the recently approved EU AI Act. However, Rishi Sunak’s decision to call a general election for 4 July means these discussions are on the back burner for the time being.
Data Protection and Digital Information Bill falls
Another consequence of the general election is that the Data Protection and Digital Information Bill did not pass into law and will need to be reintroduced in the next Parliament. However, there is speculation that a Labour government might look to introduce a new digital and AI bill along entirely different lines. Based on prior announcements, a new Labour government is expected to pursue a more active AI regulatory agenda with binding rules, a new regulatory body, an emphasis on public safety and trust, and responsible AI development guidelines, and generally to take a more urgent, and less laissez-faire, approach.
The breakneck speed of AI innovation, the intricate web of emerging regulations, and the profound societal and economic ramifications of these technologies make it essential for all stakeholders to remain vigilant and closely track the evolving regulatory landscape.
If you would like to discuss how this affects your business, please get in touch with Tim Wright or Nathan Evans.