
AI Round-Up – July 2024

London Tech Week 2024 drew more than 25,000 visitors over 10-12 June to the event’s new venue at Olympia in Kensington, and with a large number of fringe events taking place across London throughout the week, organisers estimate total attendance at up to 45,000. Not surprisingly, AI took centre stage at #LTW24, with dedicated sessions, keynote speeches and panel discussions exploring the latest advancements, challenges and opportunities in the field. Fladgate’s AI team attended the event alongside industry leaders and experts, exhibitors and sponsors, and took part in thought-provoking conversations on topics such as AI ethics, data governance and the real-world impacts of AI technologies, as well as navigating the growing framework of AI laws, regulations, codes and guidance.

We also saw exciting breakthroughs, such as DeepMind’s announcement of AlphaFold 3, the latest milestone in its work on the long-standing challenge of protein structure prediction, potentially accelerating drug discovery and disease research, whilst researchers at the University of Michigan are using AI to decode “dog vocalisations” – developing tools to identify whether your canine friend’s bark conveys aggression or playfulness. OpenAI unveiled GPT-4o, its new flagship large language model (the “o” standing for omni – a reference to the model’s multiple modalities for text, vision and audio), with the aim of turning ChatGPT into a digital personal assistant that can engage in real-time, spoken conversation.

In other news…

Use of AI in government tenders

The UK government published a Procurement Policy Note (PPN 02/24) providing guidance to contracting authorities on the use of AI in tenders. The PPN aims to address risks associated with AI systems potentially accessing and using confidential information during tender creation. In particular, it requires suppliers to disclose their use of AI in tender responses; where AI is involved, contracting authorities must then conduct additional due diligence, such as requesting supporting documents, to ensure the supplier's capability to fulfil contract requirements and the accuracy of tender information. For tenders involving national security, additional risk mitigation measures may be implemented. Suppliers may also be required to provide declarations and details if AI will be used in service delivery, allowing contracting authorities to assess potential risks and ensure appropriate safeguards are in place.

EU’s dedicated ChatGPT task force reports

The European Data Protection Board's dedicated ChatGPT taskforce, established to co-ordinate the actions and enforcement of data protection authorities across EU member states in relation to ChatGPT, released its report. Not surprisingly, the report highlighted the need for AI providers to implement robust data protection safeguards, conduct data protection impact assessments, and ensure adequate user information and control over personal data processing. The report also signalled the taskforce's intention to undertake further work on a comprehensive policy framework for ensuring that AI systems like ChatGPT comply with the GDPR and other EU data protection requirements. This framework aims to provide clear guidelines and standards for AI providers operating in the EU, ensuring that the development and deployment of AI technologies align with the region's strict data protection and privacy rules.

Advertising Standards Authority makes progress using AI to regulate online advertising

In its recently published 2023 annual report, the ASA set out its progress in leveraging AI to enhance the regulation of online advertising. The regulator says it has implemented an AI-based monitoring system that enables it not only to take down problematic advertisements, but also to report on areas of high compliance following its interventions, an approach that has markedly improved the effectiveness of its regulation of online advertising. Building on this success, the ASA plans to significantly scale up its AI-assisted ad monitoring over the next five years through its "AI-Assisted Collective Ad Regulation" strategy. This ambitious initiative aims to cover a broader range of online advertisements across various platforms, further strengthening the ASA's ability to ensure compliance with advertising standards and protect consumers from misleading or harmful content.

MiFID considerations for investment firms’ use of AI

The European Securities and Markets Authority (ESMA) published a statement providing guidance to investment firms on the use of AI systems and relevant considerations under MiFID II (Markets in Financial Instruments Directive). The statement outlines ESMA's expectation that firms must comply with relevant MiFID II requirements when using AI, particularly around organisational aspects, governance and oversight, conduct of business rules, and the obligation to act in the best interests of clients. ESMA highlighted the inherent risks associated with AI use, including algorithmic biases, data quality issues, opaque decision-making, over-reliance on AI, and privacy/security concerns related to data processing. The statement emphasises the need for investment firms to implement robust governance frameworks, conduct thorough risk assessments, and ensure transparency and accountability when deploying AI systems in their operations and decision-making processes.

AI Act still not published

The AI Act will enter into force 20 days after its publication in the Official Journal; because of the delay in publication, this is now expected to be in early August. Despite the short delay, businesses should not slow down their preparations: the Act introduces significant new obligations, which will apply in stages over the months and years following entry into force, and companies will need to invest considerable time and resources to ensure compliance.

Italy progresses its own AI Bill

The Italian government has made progress in developing its own regulatory framework for AI, with the Council of Ministers approving a draft bill in April to regulate AI in Italy. While not yet law, the proposal will undergo discussion and amendment in Parliament before final approval. The AI Bill aims to regulate AI technologies so as to minimise risks to citizens and is designed to complement, rather than overlap with, the EU's AI Act. If enacted, the Bill will amend the Italian Copyright Law in relation to AI-generated works, require audiovisual and radio content produced using AI to be clearly identified as such for users, and oblige online platforms to protect users from AI-generated misinformation presented as fact.

EU legislators turn their attention to AI factories

The AI Factories Act, which focuses on the use of supercomputing in AI development, was adopted by the Council of the European Union on 17 June 2024 and will come into effect 20 days after its publication in the Official Journal. The regulation defines AI factories as entities that develop and train large AI models using supercomputers, and aims to ensure that AI factories operating in the EU use European supercomputers for a significant portion of their AI model training. It will apply to AI factories developing foundation models trained using more than 10^25 floating-point operations (FLOPs) of compute, and will require them to meet at least 20% of their total computing capacity needs from European high-performance computing (HPC) and data infrastructure. The measure forms part of the EU's broader effort to strengthen its position in AI development and to ensure that European infrastructure plays a significant role in the advancement of AI technologies.

New alliance launched to focus on responsible generative AI

Finally, the World Economic Forum launched the AI Governance Alliance, a dedicated initiative focused on responsible generative AI. The Alliance expands on the Forum's existing framework and builds on the recommendations of Responsible AI Leadership: A Global Summit on Generative AI. It will prioritise three main areas: ensuring safe systems and technologies; promoting responsible applications and transformation; and contributing to resilient governance and regulation. It will also provide guidance on the responsible design, development and deployment of AI systems.

We will now take a short break for the summer but will be back later in the year with more of the action in this fast-moving field. As ever, if you would like to discuss anything in this article, please get in touch with Tim Wright or Nathan Evans.
