The AI revolution is in overdrive. Blink and you'll miss it. Breakthroughs are dropping faster than tech stocks in a bubble burst. This month’s round-up cuts through the noise, serving up the juiciest morsels of AI progress. No fluff, just the hard-hitting developments reshaping our digital future. Miss this, and you might as well be living under a rock in the age of silicon. And it’s a bumper edition, given the pace of developments right now.
AI in medicine: navigating the product lifecycle
The European Medicines Agency (EMA) has published a reflection paper exploring the role of artificial intelligence and machine learning in the lifecycle of medicinal products, encompassing drug development, authorisation and post-market surveillance. It highlights that while AI models can offer significant advancements, their complex architectures and extensive parameters pose new risks that must be managed during both development and deployment to ensure patient safety and the integrity of clinical trial results. The paper emphasises the necessity of addressing potential biases in AI/ML applications to enhance their reliability and trustworthiness. By advocating for a risk-based approach, the EMA aims to encourage collaboration among developers, academics and regulators to maximise the benefits of AI innovations in healthcare while navigating associated regulatory challenges.
Apple, Meta and Nvidia snub EU AI pact
The European Commission has announced that over 100 companies, including Amazon, Google, Microsoft and OpenAI, have signed the EU AI Pact, a voluntary initiative aimed at promoting responsible AI development. Notably, tech giants Apple and Meta, along with semiconductor player Nvidia, have opted not to join, with Meta citing a focus on compliance with the AI Act itself and Apple providing no specific reason for its abstention. The pact encourages signatories to adopt governance strategies, identify high-risk AI systems and promote AI literacy among staff. The absence of these major players could hamper the EU's regulatory efforts as it rolls out the AI Act.
Work on general-purpose AI Code of Practice kicks off
On 30 September, the European AI Office hosted an online kick-off plenary to initiate the development of the first Code of Practice for general-purpose AI models under the AI Act. This significant event attracted around 1,000 participants, including AI model providers, industry representatives, academics and civil society members, all contributing to a collaborative framework aimed at ensuring responsible AI deployment. The plenary focused on establishing working groups, timelines and expected outcomes, while also sharing insights from a recent multi-stakeholder consultation that garnered approximately 430 submissions. The iterative drafting process will unfold over the coming months, culminating in a final version of the Code to be presented in April 2025.
The Commission has appointed several academic experts as chairs and vice-chairs of working groups tasked with drafting the GPAI Code of Practice, with four working groups established to address specific aspects of the Code: (1) Transparency and copyright-related rules; (2) Risk identification and assessment measures; (3) Risk mitigation measures; and (4) Internal risk management and governance for GPAI model providers.
From what we hear, no one seems particularly happy about how the process to draft the Code is shaping up.
Progress made on AI standardisation to support AI Act
The European Commission has issued a standardisation request to CEN and CENELEC, tasking them with developing European standards for AI by 30 April 2025. These standards aim to ensure AI systems in the EU market are safe, uphold fundamental rights and foster innovation. CEN-CENELEC Joint Technical Committee 21 (JTC 21) proposed a roadmap for AI standardisation which was evaluated by the Commission's Joint Research Centre, identifying gaps in existing international standards and suggesting additional standards to support the AI Act. CEN and CENELEC have published a work programme and dashboard detailing progress on developing additional standards. Some AI Harmonised Standards have already been adopted, including CEN/CLC ISO/IEC TR 24027:2023 and ISO/IEC 23894:2023. However, the completion of all Harmonised Standards is expected to be delayed until late 2025, potentially leaving companies with only a short period in which to implement them before the relevant provisions of the AI Act come into effect in August 2026.
California governor vetoes controversial AI Safety Bill
Governor Gavin Newsom has vetoed SB 1047, a contentious AI safety bill which aimed to impose safety testing requirements on large-scale AI models. The bill was reported to have been rejected on the grounds that it "only focuses on the most expensive and large-scale models," potentially overlooking risks from smaller, specialised systems. Despite vetoing SB 1047, Newsom has signed 18 other AI-related bills into law so far. These new regulations cover a range of issues including watermarking AI-generated content, combating sexual deepfakes and regulating AI in political advertising.
Google joins the party, embracing nuclear power to fuel AI ambitions
Google has joined the ranks of tech giants turning to nuclear energy to meet the soaring power demands of their artificial intelligence operations. The company has signed a groundbreaking deal with Kairos Power, a California-based firm, to purchase energy from multiple small modular reactors (SMRs). The agreement marks a significant step towards revitalising America's nuclear industry, with the first SMR slated to come online by 2030, with additional deployments planned through 2035. Google has committed to buying 500 megawatts of power from six to seven reactors, though the specific locations for these plants remain undisclosed.
Kairos Power is a relatively young company, founded in 2016. Intriguingly, its technology is still in the demonstration phase and not yet fully proven at a commercial scale. The move aligns Google with other tech behemoths like Microsoft and Oracle, both of whom are seeking clean energy solutions to power their expanding data centre networks. Not to be outdone, Amazon also joined the party, recently announcing three new agreements to support the development of nuclear energy projects, including enabling the construction of several new Small Modular Reactors.
OpenAI raises $6.6 billion, valued at $157 billion
OpenAI has closed a $6.6 billion funding round, elevating its valuation to $157 billion, nearly double the approximately $80 billion the company was valued at earlier this year, and underscoring strong investor confidence in the potential of AI. The funding will support OpenAI's ongoing research and development efforts, enhancing its products and computing capabilities. Major backers include Thrive Capital, SoftBank, Nvidia and Microsoft, reflecting sustained investor appetite for AI advancements in the tech industry.
Nobel prize in physics awarded to AI pioneers Hinton and Hopfield
Geoffrey Hinton and John Hopfield have been awarded the 2024 Nobel Prize in Physics for their groundbreaking work on artificial neural networks, which laid the foundation for modern AI. The Royal Swedish Academy of Sciences recognised their contributions to machine learning, citing the widespread impact of their research on fields ranging from climate modelling to medical diagnostics. Hinton, often called the "Godfather of AI," is known for his work on deep learning and neural networks, whilst Hopfield's key contribution was the development of the Hopfield network, an associative memory model. Their combined research has been instrumental in advancing AI technology, leading to applications in areas like facial recognition, language translation and pattern recognition.
Advancements in AI for enhanced cancer detection and diagnosis
Scientists have published an article in Nature which presents a potentially groundbreaking development in cancer diagnosis and prognosis prediction using AI. The researchers have created a pathology foundation model, a general-purpose weakly supervised machine learning framework designed to extract pathology imaging features for systematic cancer evaluation. The study involved a large team of researchers from various institutions, with key contributions from authors including Jun Zhang and Jing Zhang. The model demonstrates significant potential in improving cancer diagnosis and prognosis prediction, which could have far-reaching implications for patient care and treatment planning. The research also highlights the collaborative nature of modern medical AI development, involving experts from multiple disciplines and institutions.
TikTok pivots to AI-driven content moderation
TikTok, the popular short-form video platform, is reported to be undergoing a major transformation in its content moderation approach. The company has begun laying off human moderators at its offices in Malaysia and the UK as it shifts towards a more AI-centric strategy for reviewing and filtering user-generated content. The company says that the move is expected to streamline the moderation process and potentially improve consistency in content decisions.
While AI-driven moderation offers scalability and efficiency, it also raises questions about the nuanced understanding of context and cultural sensitivities that human moderators bring to the table. TikTok has made assurances that the transition will be carefully managed to maintain the platform's community guidelines and safety standards. The move comes as social media platforms face increasing scrutiny from lawmakers and regulators over their content moderation practices, particularly regarding misinformation and harmful content, with regulators overseeing and enforcing compliance with recently enacted laws such as the Online Safety Act in the UK (and in Australia), the Harmful Digital Communications Act in New Zealand and the Digital Services Act in the EU.
Adobe launches Firefly Video Model in Premiere Pro
Adobe recently announced the Firefly Video Model in Premiere Pro, a new AI-powered toolset aimed at enhancing video editing capabilities. Key features include Generative Extend, which allows editors to lengthen clips with AI-generated frames, addressing common editing challenges. Adobe’s press release states that the Firefly Video Model is built with content rights in mind, as the AI is trained on licensed or public domain material. The new features are expected to be available in Premiere Pro beta later this year, streamlining the editing process for creators. So far, Adobe has carefully avoided getting caught up in AI content litigation, such as the copyright lawsuits involving Stability AI and Midjourney. It updated its legal terms to make clear that it doesn't train its AI models on users' cloud content and has offered indemnification against third-party IPR claims for enterprise customers using its Firefly AI tool.
Cisco invests in AI start-up CoreWeave
Cisco Systems has reportedly made an investment in CoreWeave, an AI start-up backed by Nvidia and several prominent private equity firms. The deal values CoreWeave at $23 billion, marking a significant milestone for the cloud computing provider specialising in AI workloads. This investment aligns with Cisco's strategy to strengthen its position in the rapidly growing AI market. CoreWeave, known for its GPU-intensive cloud services, has gained attention for its ability to offer high-performance computing solutions tailored for AI applications.
IBA and CAIDP release report on AI's impact on legal profession
The International Bar Association (IBA) and the Centre for AI and Digital Policy (CAIDP) have jointly published a significant report titled "The Future is Now: Artificial Intelligence and the Legal Profession." This collaboration brings together the IBA's global legal expertise and CAIDP's specialised knowledge in AI policy and human rights advocacy. The report, directed by CAIDP Founder Marc Rotenberg, examines the intersection of AI and legal practice, and highlights the growing importance of the legal profession's engagement with AI governance issues, emphasising the need for lawyers to understand and participate in shaping AI regulations and ethical frameworks.
Microsoft CEO advocates for Copyright Law reform to support AI development
Satya Nadella, the CEO of Microsoft, has called for a significant overhaul of copyright laws to facilitate the training of AI models without risking intellectual property infringement. Nadella praised Japan's more flexible approach to copyright legislation, which allows companies to use copyrighted materials for AI training without facing legal consequences. He emphasised the need for governments to establish a new legal framework that defines "fair use" of material, enabling certain situations where intellectual property can be utilised without explicit permission. This stance aligns with Microsoft's broader strategy of integrating AI into its products and services, as evidenced by the company's recent $2.9 billion investment in AI infrastructure in Japan. Nadella's advocacy for copyright reform underscores the growing tension between rapid AI advancement and existing intellectual property protections, highlighting the need for a balanced approach that fosters innovation while respecting creators' rights.
In-car tech innovations: holographic displays, AI assistants and smart "infotainment"
And finally, we note how the automotive sector is currently seeing a surge in technological advancements aimed at enhancing the driving experience. Ford has unveiled Holoflekt, a thin-film holographic system that projects essential information across the windshield, minimising driver distraction, whilst Volkswagen has partnered with Google Cloud to introduce an AI-powered Virtual Assistant in its myVW mobile app, offering US users intelligent support for various vehicle-related queries. Other notable developments include Hyundai and Kia's collaboration with Samsung Electronics to create a next-generation "infotainment" system with an open ecosystem, and Alibaba Cloud's partnership with Nvidia to improve smart mobility solutions for Chinese automakers.
The coming months will be pivotal in defining how AI integrates into our society, and staying informed will be key for all involved, with businesses faced with navigating an increasingly intricate regulatory environment. To discuss how we can support you on this journey, please get in touch with Tim Wright or Nathan Evans.