
AI Round-Up - March 2025

February saw the AI landscape continue to evolve at a breakneck pace, with significant developments shaping the future of this transformative technology. The AI Action Summit in Paris brought together global leaders, innovators and policymakers to address critical themes such as public service AI, the future of work and global AI governance, with a controversial keynote delivered by US Vice President JD Vance on his first overseas trip since assuming office.

European initiatives

InvestAI initiative announced

The European Union unveiled InvestAI, an ambitious initiative aimed at mobilising €200 billion for AI development and innovation across Europe. Drawing comparisons to the US-led Stargate initiative, InvestAI focuses on funding start-ups, expanding research capabilities and ensuring ethical AI development. Key details include a €20 billion European fund dedicated to AI gigafactories, with plans to establish four such facilities across the EU, each housing approximately 100,000 next-generation AI chips and built with an emphasis on sustainability. Funding will draw on existing EU programmes such as the Digital Europe Programme, Horizon Europe and InvestEU, with member states encouraged to contribute through their cohesion funds.

OpenEuroLLM: Europe's answer to US and Chinese AI?

A coalition of over 20 European entities, including companies, universities and supercomputing centres, is developing OpenEuroLLM, an open-source language model intended to rival AI assistants such as ChatGPT and DeepSeek. OpenEuroLLM will support 35 languages and be made accessible to citizens, businesses and public administrations, providing a European alternative in a field currently led by US and Chinese players.

EU issues guidelines on AI system definitions

The European Commission released guidelines clarifying the definition of an AI system under the EU AI Act. The guidelines aim to provide consistency for developers and regulators while addressing ambiguities in the Act's wording, part of the EU's broader effort to balance innovation with accountability.

First AI Act deadlines come and go

The EU AI Act, in force since 1 August 2024, has introduced several critical compliance deadlines for 2025. On 2 February, the first major milestone was reached with a ban on AI practices deemed to pose "unacceptable risks", such as social scoring, exploitative biometric categorisation and certain uses of facial recognition technology. On the same day, Article 4 of the Act came into effect, requiring providers and deployers of AI systems to take measures to ensure their staff possess an adequate level of AI literacy.

AI Liability Directive withdrawn

In a surprising turn of events, the European Commission announced its decision to withdraw the AI Liability Directive from its 2025 work programme, citing "no foreseeable agreement" on the law. According to reports, the Commission will now assess whether to table another proposal or choose an alternative approach. The move, coming in the wake of criticism from Vice President JD Vance at the Paris summit, signals a potential shift in the EU's approach to AI regulation and has sparked debate about the balance between innovation and accountability. Some critics argue that the withdrawal is a setback for establishing clear legal frameworks for AI accountability: without a dedicated directive, questions remain about how damages caused by AI systems should be addressed, potentially hindering innovation and eroding public trust in the technology.

EU gears up to respond to US AI chip export controls

The EU is preparing a multi-pronged response to US export controls on AI chips, aiming to mitigate economic disruption and bolster its own AI development. Diplomatic efforts are underway to seek non-discriminatory treatment, while the EU assesses the impact on its market and supply chains. Strategic initiatives like the Chips Act and the Chips Fund aim to boost domestic chip manufacturing and reduce reliance on non-EU sources. The EU is also exploring collaborative solutions with the US, offering greater transparency and co-operation in exchange for exemptions, and advocating for common international AI standards. The overarching goal is to safeguard the transatlantic AI technology supply chain while promoting the EU's strategic independence in this critical sector.

UK initiatives

Britain declines to sign AI Summit Declaration

At the Paris AI Action Summit, the UK stood alongside the US in declining to sign a declaration on “inclusive and sustainable” AI development, which was endorsed by 57 other nations, including China and India. The UK government cited concerns over the lack of “practical clarity” on global AI governance and unresolved issues around national security, emphasising decisions aligned with British interests rather than international consensus.

Spotlight on AI safety measures

While the Paris AI Action Summit generated much discussion, the International AI Safety Report 2025, commissioned at the 2023 AI Safety Summit at Bletchley Park and published in time for the Paris summit, provides a thorough review of the past year's AI developments. Led by Yoshua Bengio, Professor at Université de Montréal, Founder and Scientific Director of Mila and Canada CIFAR AI Chair, and written by a team of 96 international experts nominated by 30 countries, the UN, the EU and the OECD, the report is intended to serve as a global handbook on AI safety for policymakers.

UK Safety Institute gets a new look

At the same time, the UK's AI Safety Institute has been rebranded as the AI Security Institute (AISI), signalling a shift towards prioritising national security and crime prevention over broader ethical concerns such as bias and free speech. Technology Secretary Peter Kyle announced the change, emphasising a focus on AI-related threats such as cyber attacks, fraud and the potential misuse of AI in developing weapons. The AISI will partner with agencies including the Home Office and the National Cyber Security Centre, while a new "criminal misuse" team will address AI-related crimes. The timing of the announcement appears to have been carefully choreographed to align the UK's approach more closely with the course set by Vice President JD Vance at the recent AI Action Summit in Paris.

UK launches inquiry into AI use in financial services

The UK Treasury Committee has launched an inquiry into the use of AI in banking, pensions and other financial services. The investigation comes as recent Bank of England figures show that 75% of firms are already using AI, with an additional 10% planning to implement it within the next three years. The inquiry aims to explore how the UK financial services industry can capitalise on AI opportunities while mitigating potential threats to financial stability and safeguarding consumers, particularly vulnerable ones. The committee is seeking evidence on various aspects, including AI's current and future use in different financial sectors, its impact on productivity, and associated risks and benefits. The deadline for submissions is 17 March 2025.

Elsewhere…

Ireland announces new AI legislation

The Irish government has announced a new Regulation of Artificial Intelligence Bill as one of more than 100 pieces of legislation included in its spring 2025 legislative programme. The purpose of the Bill is to give full effect to the EU AI Act, including the designation of national supervisory authorities.

G7/OECD Hiroshima AI process reporting framework released

The OECD launched the Hiroshima AI Process (HAIP) Reporting Framework on 7 February to monitor compliance with the G7 Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems. The framework aims to promote transparency, accountability and responsible development of advanced AI systems globally. Organisations are invited to submit their first reports by 15 April 2025, with annual updates thereafter. The framework, developed through multi-stakeholder co-operation, includes an online platform for easy submission and public access to reports, and aligns with the G7's commitment to safe, secure and trustworthy AI development.

US court rules in AI training case

On 11 February 2025, Judge Stephanos Bibas ruled in favour of Thomson Reuters in a landmark copyright case against Ross Intelligence, finding that Ross' use of Thomson Reuters' legal headnotes to train its AI-powered legal research tool constituted copyright infringement and was not protected by fair use. The Judge held that Thomson Reuters' headnotes were sufficiently original and creative to be copyrightable, that Ross' use of them was commercial and non-transformative, and that Ross' fair use defence failed because the copying was not necessary to access the underlying ideas and because Ross intended to compete with Thomson Reuters. The ruling specifically addressed 2,243 headnotes found to infringe Thomson Reuters' copyrights. Whilst the decision is significant for content owners in their ongoing disputes with AI companies, the Judge distinguished Ross' tool from generative AI, emphasising that it did not create new content but retrieved existing judicial opinions. The ruling leaves some issues for trial, including potential infringement of Thomson Reuters' West Key Number system and thousands of other headnotes.

Evo 2: groundbreaking biomolecular Gen AI tool released

Researchers from Arc Institute, Stanford University and NVIDIA have unveiled Evo 2, the largest AI model for biology to date. Trained on 128,000 genomes spanning all domains of life and encompassing 9.3 trillion nucleotides, Evo 2 can generate entire chromosomes and small genomes while also interpreting existing DNA. The model's unprecedented 1 million token context window allows it to detect relationships between distant DNA segments, which is crucial for understanding genome-wide gene regulation, and it demonstrated near state-of-the-art accuracy in predicting the effects of BRCA1 gene mutations. The project is fully open source, with Evo 2's code and training data available on Arc Institute's GitHub, making it the most extensive publicly accessible AI model in biology.

DeepSeek sparks controversy

DeepSeek, the Chinese AI start-up founded in 2023, rocked markets with the January release of its latest model, DeepSeek-R1. R1 is said to rival top US AI systems at a fraction of the cost, using fewer advanced chips, and its debut triggered sharp drops in the market value of giants like NVIDIA. The development has been dubbed "AI's Sputnik moment", prompting concerns about US tech supremacy and sparking debate over AI development strategies, as well as fears about security and privacy. In no particular order: Italy banned DeepSeek; the US Navy and US Congress blocked its use due to "security and ethical concerns"; and Taiwan advised its government departments against using the platform. US Senator Josh Hawley introduced the "Decoupling America's Artificial Intelligence Capabilities from China Act", targeting Chinese AI technologies like DeepSeek; the bill proposes severe penalties for Americans using or collaborating with Chinese AI systems, citing national security risks. Meanwhile, the Belgian and Irish data protection authorities have launched investigations into DeepSeek's data processing and storage practices, and Germany, Australia, France and South Korea are all said to be planning to question the company about similar concerns. India, by contrast, plans to host DeepSeek's AI models on local servers.

And finally… Alibaba announces $53 billion AI and cloud computing investment

Chinese tech giant Alibaba announced a massive investment of 380 billion yuan ($53 billion) in AI and cloud computing infrastructure over the next three years, a sum that surpasses the company's total AI and cloud spending over the past decade. The move comes shortly after Alibaba's co-founder Jack Ma was seen meeting with President Xi Jinping, sparking renewed investor confidence in the Chinese tech sector. Alibaba's CEO, Eddie Wu, emphasised the transformative potential of AI, particularly artificial general intelligence (AGI), and its ability to reshape global industries.

AI adoption will undoubtedly remain a critical priority for businesses seeking to innovate and enhance efficiency throughout 2025 and beyond. The accelerating evolution of AI technologies presents both unparalleled opportunities and complex legal and ethical challenges. Achieving successful AI integration necessitates a balanced approach, one that embraces technological advancements while simultaneously prioritising robust governance frameworks and responsible deployment practices.

Wherever you are on your AI journey, we would love to hear from you. Get in touch with Tim Wright or Nathan Evans if you would like to discuss any of the contents of this article in more detail. 
