AI Round-up - September 2024

As we settle back into our desks this September, refreshed from our summer adventures, the world of AI has been buzzing with activity. This month’s round-up highlights the key developments, regulatory updates and industry shifts that you may have missed while sunning yourselves on the beach. So, let’s shake off the sand and dive into the latest happenings in the ever-evolving realm of artificial intelligence.

EU AI Act implementation

One of the most notable regulatory developments was the European Union's AI Act, which came into force on 1 August 2024. This landmark legislation aims to establish harmonised rules for artificial intelligence across the EU. It will require businesses to conduct AI mapping exercises to identify the AI systems and models in use, assess compliance and, in particular, prepare for the requirements relating to high-risk AI systems, including implementing risk management systems, ensuring transparency and maintaining technical documentation. The Act will be implemented in phases:

  • 2 February 2025 - Prohibition on AI systems presenting unacceptable risks. These include social scoring and certain biometric categorisation, biometric identification and facial recognition.
  • 2 August 2025 - Rules for general-purpose AI models.
  • 2 August 2026 - General applicability of the EU AI Act's provisions, including requirements on high-risk AI systems in Annex III.
  • 2 August 2027 - Applicability of the rules to high-risk AI systems listed in Annex I (e.g. toys, medical devices, civil aviation safety).

The Commission has also established an AI Office to oversee the Act's implementation and serve as a central point of contact for regulators and other stakeholders.


European Commission workshop on competition in virtual worlds and generative AI

At the end of June, the European Commission held a workshop on competition in virtual worlds and generative AI which featured prominent speakers including Margrethe Vestager, Olivier Guersent and Benoît Cœuré, alongside panel discussions with industry experts, academics, and EU competition officials. Key takeaways include:


  • Ongoing monitoring: the EC will continue to monitor market concentration, anticompetitive behaviour and relationships between large tech companies and start-ups active in these sectors, as well as monitoring distribution channels.
  • Importance of access to key inputs: participants discussed the importance of access to key inputs and essential resources for the development of both virtual world and AI technologies, as well as a number of ways to potentially limit barriers to entry, including public investment.
  • Adaptability of competition law tools: applicable competition tools are fit for purpose but should remain adaptable and innovative in order to address emerging issues.


Call for tenders: energy efficiency and reduced carbon footprint of AI technologies

The Commission also launched a call for tenders, closing 23 September, for a study on how to foster energy efficiency and limit the carbon footprint of AI technologies. The tender is structured along four work packages, corresponding to the study's four main objectives:


  • Explore the current and estimated future carbon footprint of AI systems.
  • Develop a measurement framework which could serve as a basis to address the energy-related objectives of the AI Act and also to develop a potential AI energy and emission label.
  • Identify the suitable governance and implementation model for the measurement framework.
  • Identify and promote energy-efficient and low emission AI best practices through an online repository.


EU set to sign Council of Europe's AI Convention

Recent reports indicate that, following something of an internal squabble between the European Commission and the 27 member states, the member states have given the Commission a mandate to sign the world's first AI treaty on behalf of the bloc on 5 September in Vilnius. The treaty, which is also open to non-European countries, sets out a legal framework covering the entire lifecycle of AI systems and addressing the risks they may pose, while promoting responsible innovation. It adopts a risk-based approach to the design, development, use and decommissioning of AI systems, requiring careful consideration of any potential negative consequences of their use.

The convention is the outcome of two years' work by an intergovernmental body, the Committee on Artificial Intelligence, which involved the 46 Council of Europe member states, the EU and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the USA and Uruguay), as well as representatives of the private sector, civil society and academia, who participated as observers.


King’s Speech omits new UK AI Bill

At home, the King's Speech, delivered on 17 July, addressed AI regulation in the UK but did not include as much detail as some had expected, notably omitting a much-touted UK AI Bill. While the speech did not provide comprehensive details on AI regulation, it did signal an intention to address the issue, with an approach that appears to focus on regulating the most powerful AI models rather than implementing broad, sweeping regulations. It also announced two other relevant bills: a Digital Information and Smart Data Bill aimed at promoting innovative use of data to boost the economy, and a Cyber Security and Resilience Bill focused on protecting critical national infrastructure.


DSIT expansion announced

The Department for Science, Innovation and Technology (DSIT) has announced a significant expansion under the UK's new Labour government. The move aims to consolidate and streamline digital transformation efforts across public services, bringing various initiatives together under a single departmental umbrella; new primary goals include upskilling civil servants in the use of digital tools and AI in their frontline work. The expansion positions DSIT as the digital centre of the new government, working closely with the Cabinet Office and the Treasury.

Separately, DSIT recently closed a call for evidence seeking views on the cyber security of AI. Announcing the call for views in May this year, Viscount Camrose, the former Minister for AI and Intellectual Property, said that the government is proposing a two-part intervention on AI: a voluntary AI Cyber Security Code of Practice, setting baseline security requirements for stakeholders in the AI supply chain, which will then be taken into a global standards development organisation for further development.


Mumsnet launches first British legal action against OpenAI

Parenting website Mumsnet has started what is reported to be the first British legal action against OpenAI, accusing the company of breaching its copyright by scraping six billion words from its website to help build its ChatGPT chatbot. In a thread on its website, Mumsnet said that when it became aware of this, it approached OpenAI and suggested the company might like to license its content, but OpenAI declined to engage. Others suing OpenAI (and Microsoft) for copyright breach include non-profit news organisation The Center for Investigative Reporting, eight US daily newspapers owned by Alden Global Capital, and the New York Times.


Illinois enacts legislation regulating employers’ use of AI

Across the pond, Illinois became the second US state to introduce rules regulating employers' use of AI. Following in Colorado's footsteps, where a similar law was passed in May, Illinois passed HB3773, which amends the Illinois Human Rights Act and will regulate employers' use of AI by making it a civil rights violation to use AI that discriminates based on protected classes or uses zip codes as proxies for discrimination. The law also requires employers to notify employees when AI is used in employment decisions.


Ai4 conference goes from strength to strength

In August, the Ai4 2024 conference in Las Vegas drew over 5,000 attendees, with speakers from more than 350 companies including AWS, Vanguard, Takeda and Wells Fargo. Now North America's largest AI industry event, the conference served as a hub for discussing the latest advancements in AI, including ethics, innovation and the social impact of AI technologies, and provided a platform for networking among government organisations, investors and start-ups, fostering collaboration and knowledge exchange.


Klarna cuts jobs citing AI efficiencies 

Recently, ‘buy now, pay later’ lender Klarna announced a significant reduction in its workforce, with plans to cut its employee count by half through efficiencies it says arise from its investment in AI, with a focus on marketing and customer service. Executives at other companies facing uncertain economic headwinds and looking to cut costs and find efficiencies will certainly take note.


AI investment reaches new heights

Finally, we note that the AI sector continues to attract significant investment. According to CB Insights' State of AI Q2'24 report, published in July, global AI funding hit $23.2 billion in the second quarter of 2024, the highest quarterly level on record, exceeding even the heights of the 2021 venture boom. Notable funding rounds in July and August include:


  • xAI - $6B Series B at a $24B valuation
  • G42 - $1.5B investment from Microsoft
  • CoreWeave - $1.1B Series C at a $19B valuation
  • Wayve - $1.05B Series C from SoftBank, Microsoft and Nvidia
  • Scale - $1B Series F at a $13.8B valuation


The AI revolution continues to unfold at an unprecedented pace, bringing both extraordinary opportunities and complex challenges. If you would like to discuss how enterprise-grade AI can be procured and deployed to enhance your business, please get in touch with Tim Wright or Nathan Evans.
