
AI Round-Up - February 2025

January was an exceptionally news-heavy month, with the AI landscape continuing to evolve at breakneck speed and ground-breaking developments reshaping policy and, in a number of cases, charting a new course. On the first day of his second term, President Donald Trump revoked Executive Order 14110, the signature AI governance framework of the Biden administration, then grabbed even more of the headlines with the Stargate announcement. Elsewhere, South Korea passed a comprehensive AI law, whilst at home the UK government unveiled an ambitious blueprint to "turbocharge" AI. February will also see the first of the EU AI Act's provisions come into force.

North America

Trump dismantles Biden's AI governance framework

In a bold move on the first day of his second presidential term, Donald Trump revoked Executive Order 14110, effectively dismantling the most significant piece of federal AI policy to date and signalling a dramatic pivot in the United States' approach to AI regulation. The repeal strips away comprehensive federal oversight mechanisms that previously required AI developers to share safety test results and adhere to strict governance protocols, replacing them with a more laissez-faire approach that prioritises innovation over stringent control. While supporters argue the move will unleash technological innovation by removing regulatory constraints, critics warn that it could compromise AI safety standards and expose society to unmitigated risks from rapidly advancing AI technologies, setting the United States on a more aggressive and less regulated AI development trajectory than other global tech powers.

Shooting for the stars

Policymakers around the world looked on as President Trump, standing alongside Masayoshi Son, Sam Altman and Larry Ellison, announced the "Stargate" project, a plan to build a system of AI data centres, with initial equity funders SoftBank, OpenAI and Oracle pledging up to $500 billion of investment over the next four years. Arm, Microsoft and NVIDIA, together with Oracle and OpenAI, were named as the initial tech partners, with a buildout "currently underway" in Texas as other sites across the US are evaluated. Coming in Trump's first week back in office, this massive initiative aims to create a new generation of AI systems that will purportedly outperform current models in areas such as reasoning, creativity and problem-solving. Trump emphasised that Stargate would prioritise "American values" and focus on practical applications in defence, healthcare and economic growth.

United Kingdom

AI Opportunities Action Plan

The UK government unveiled the AI Opportunities Action Plan on 13 January 2025, a comprehensive strategy developed by tech entrepreneur Matt Clifford to position the UK as a global AI superpower. The ambitious plan, which includes 50 recommendations fully endorsed by the government, aims to accelerate AI development and adoption across the country. Key initiatives include establishing AI Growth Zones to streamline infrastructure development, expanding the AI Research Resource's computing capacity twentyfold, reforming copyright laws to facilitate AI innovation, and creating a National Data Library to unlock the value of public data. The plan also emphasises attracting global AI talent, fostering domestic AI companies, and leveraging the UK's strengths in AI safety and regulation.

AI Growth Zones: accelerating data centre development

A cornerstone of the AI Opportunities Action Plan is the concept of "AI Growth Zones" with a focus on accelerating AI infrastructure development across the United Kingdom. These dedicated zones are designed to streamline planning approvals for data centres and enhance access to energy infrastructure, particularly in de-industrialised regions. The first such zone will be established in Culham, Oxfordshire, leveraging the area's expertise in sustainable energy research. This initiative aims to fast-track investment, foster innovation and create jobs in local communities, while simultaneously addressing the growing demand for AI-specific compute capabilities. Major tech firms including Vantage Data Centers have already committed substantial investments, with plans to establish one of Europe's largest data centre campuses in Wales, potentially generating thousands of new jobs.

Copyright reforms proposed to “balance” AI innovation and creative rights

The UK government has also unveiled a consultation paper on copyright law reforms, aiming to enhance the UK's appeal to AI developers while safeguarding the interests of the creative industries. The consultation, which closes on 25 February 2025, centres on a proposed "opt-out" text and data mining (TDM) copyright exception that would allow AI companies to use copyrighted content for training models unless rightsholders explicitly reserve their rights - a significant shift from the UK's current stance.

UK government responds to AI Governance Report, signalling legislative intent

The UK government published its response to the Science, Innovation and Technology Committee's report on AI governance, addressing the report’s recommendations across various themes. While largely maintaining the principles-based approach to AI regulation of the prior administration, the government did signal an intention to introduce legislation for the most powerful AI models. A consultation on these proposals is expected in the spring, as part of the government's commitment to balancing innovation with responsible AI development. The response also outlines plans to enhance regulatory capabilities, drive AI adoption in both public and private sectors, and establish a new sovereign AI function to strengthen the UK's position in frontier AI development.

South Korea

South Korea passes AI Basic Act

On 26 December 2024, South Korea's National Assembly voted to approve and adopt the AI Basic Act, making South Korea the first country in Asia to pass a comprehensive AI law. Set to take effect in January 2026, the AI Basic Act's core objectives are to advance innovation, boost exports, mitigate AI risks and promote trustworthy AI. Key measures include the classification of "high-impact AI" systems - i.e. those affecting critical sectors like healthcare and public safety - and mandatory transparency, risk management and human oversight for such systems. The Basic Act does not set out substantive laws and regulations in the same way that the EU AI Act does. Rather, it provides operative rules placing obligations on the Ministry of Science and ICT and other related authorities, whilst emphasising support for AI industry growth through initiatives like AI data centres, workforce development and international collaboration.

European Union

Prohibited AI systems deadline

2 February 2025 will see the EU AI Act's ban on prohibited AI practices take effect. This marks a crucial milestone in the Act's implementation, as it will outlaw AI applications deemed to pose unacceptable risks to fundamental rights and safety. The prohibited practices include the use of subliminal techniques, exploitation of vulnerable groups, social scoring and certain applications of facial recognition and emotion recognition systems. Additionally, 'real-time' remote biometric identification systems in public spaces for law enforcement purposes will be banned, with some limited exceptions.

EDPB issues opinion on data protection and AI model training

The European Data Protection Board (EDPB) adopted a comprehensive opinion addressing the use of personal data in AI model training, providing critical guidance for compliance with the GDPR. The opinion highlights key considerations, such as the applicability of legitimate interest as a legal basis for processing, the importance of balancing tests to protect individuals' rights, and the conditions under which AI models can be deemed anonymous. It underscores that unlawfully processed personal data in training can lead to enforcement action unless subsequent use involves true anonymisation, which remains technically challenging.

Second draft of the General-Purpose AI Code of Practice

The European Commission released the second draft of the General-Purpose AI Code of Practice, incorporating feedback from nearly 1,000 stakeholders, including EU Member States and international observers. This draft, shaped by consultations and workshops held in late 2024, aims to guide AI model providers in complying with the AI Act throughout their models' lifecycles and includes measures for transparency, copyright obligations, systemic risk mitigation and preliminary Key Performance Indicators (KPIs). The Code is relevant for models released after 2 August 2025, when the new regulations take effect. Further discussions and refinements are planned for early 2025, with a third draft expected by mid-February.

China

DeepSeek R1 challenging global tech giants

DeepSeek, a Chinese AI start-up, has been making significant waves in the tech industry with its latest AI model, DeepSeek R1. Launched at the end of January, R1 demonstrated performance comparable to, or surpassing, OpenAI's offerings while apparently having been developed at a fraction of the cost of the billions spent by competitors. The breakthrough caused a stir in the stock market, with NVIDIA's stock dropping 17%, and saw DeepSeek's app briefly overtake ChatGPT in US app store downloads. The company's cost-effective approach, which relies on innovative training methods and open-source technology, has sparked discussions about the future of AI development and its implications for major tech companies. Critics have warned of potential risks such as vulnerability to security exploits, jailbreaks and trapdoors, as well as potential biases favouring China's geopolitical agenda.

Industry initiatives and events

NVIDIA completes £560 million Run:ai acquisition

NVIDIA closed its acquisition of Run:ai, an Israeli AI infrastructure orchestration start-up, for approximately £560 million after overcoming regulatory hurdles in the European Union. The deal, initially announced in April 2024, faced scrutiny from both the U.S. Department of Justice and the European Commission due to concerns about potential market dominance in the GPU and AI orchestration software markets. As part of the agreement, NVIDIA will open-source Run:ai's software, enabling compatibility with competing hardware and addressing antitrust concerns. While the European Commission has approved the transaction, the Department of Justice in the United States is reportedly still investigating NVIDIA’s purchase of Run:ai.

Microsoft expands AI models for 365 Copilot

Microsoft is said to be diversifying its AI model ecosystem for 365 Copilot by exploring and integrating models beyond OpenAI's technology, signalling a nuanced approach to enterprise AI development. While maintaining its core partnership with OpenAI, the tech giant aims to incorporate internal models like Phi-4 and potentially open-source solutions such as Meta's Llama series to enhance performance, reduce operational costs and provide more flexible AI capabilities for enterprise customers.

Anthropic AI nears $2 billion funding round, eyeing $60 billion valuation

Start-up Anthropic is reportedly in advanced talks to secure a $2 billion funding round led by Lightspeed Venture Partners, potentially valuing the company at an impressive $60 billion, underscoring the rapidly growing investor confidence in AI technologies. Founded in 2021 by former OpenAI employees, Anthropic has quickly established itself as a key player in the generative AI space with its chatbot Claude, competing directly with industry giants like OpenAI and Google.

Tech Show London conference programme unveiled for 2025

Finally, Tech Show London looks set to deliver an exciting two-day event on 12-13 March 2025 at ExCeL London, featuring over 200 hours of expert-led sessions and bringing together 19,350+ industry leaders and 400 world-class speakers, with the Mainstage Theatre set to tackle critical tech topics including AI ethics, digital sustainability, cybersecurity and technological innovation. If you are planning to attend, do get in touch with us.

Wrapping it up…

The global landscape of AI regulation and development is evolving rapidly, with different nations taking varied approaches to balance innovation and responsibility. The UK, EU and South Korea are demonstrating a nuanced strategy that emphasises ethical considerations and responsible AI development, while the US under President Trump's administration has taken a divergent path. As we enter what may well be a pivotal phase, the focus on ethical AI and responsible development will likely shape the future trajectory of AI innovation and its impact on society.

Wherever you are on your AI journey, we would be delighted to hear from you. Get in touch with Tim Wright or Nathan Evans if you would like to discuss in more detail. 
