As we approach the end of 2024, the Fladgate AI team reflects on a transformative year in artificial intelligence with a bumper issue of our monthly round-up. The past month marked the second anniversary of ChatGPT's launch on an unsuspecting public, and the AI landscape continues to evolve rapidly, with significant advancements, widespread adoption across sectors, and a proliferation of public and private sector deals. Our round-up highlights some of the key trends shaping the industry: the growing influence of generative AI, multimodal systems and the integration of AI into business workflows.
European initiatives
EU AI Act's scientific panel takes shape
The European Commission has opened a public consultation on the draft implementing act establishing a scientific panel of independent experts under the AI Act. This panel will play a crucial role in advising and assisting the AI Office and national market surveillance authorities in the implementation and enforcement of the Act. During the four-week feedback period, which ran from 18 October to 15 November, stakeholders were invited to contribute their insights on the panel's establishment and operational rules. All submissions that comply with the established feedback guidelines will be published and considered in finalising the initiative. This consultation represents a significant step in shaping the governance structure for AI regulation in the EU.
European rightsholders push for robust AI Act implementation to protect copyrights
A coalition of European rightsholders' groups has issued a joint letter, "News Media Europe AI Act: Joint letter of right holders", urging "meaningful implementation of the AI Act" to safeguard copyright protections in the age of AI. The statement, addressed to incoming EU Commissioners, MEPs and member states' representatives, emphasises the need for rightsholders to "exercise and enforce their rights" regarding AI training on copyrighted data. This move comes in response to industry calls for "regulatory certainty" on AI and ahead of upcoming commissioner hearings. The AI Act requires general-purpose AI providers to publish detailed summaries of the training data used, with the specifics to be determined in the Code of Practice for GPAI.
Council on AI Liability Directive faces opposition
The Council's Working Party on Civil Law Matters recently convened to discuss the proposed AI Liability Directive, with significant opposition emerging from several EU member states, particularly France, Italy and Denmark, which consider the directive redundant. Critics argue that existing liability frameworks are adequate for addressing issues related to AI systems, and raise concerns about the proposed shifting of the burden of proof in AI-related cases and the directive's broad scope.
First AI factories proposals submitted to European Commission
The European Commission has received seven proposals for AI factories from 15 member states, marking a significant step towards boosting AI innovation in the region. AI factories are, in effect, strategic ecosystems centred around European public supercomputers and associated data centres, designed to boost AI innovation and competitiveness across the bloc. This initiative, submitted under the EuroHPC Joint Undertaking, is part of the EU's broader strategy to foster a competitive and innovative AI ecosystem, support start-ups and SMEs, and maintain Europe's technological sovereignty in the field of AI. Commission Executive Vice President-designate Henna Virkkunen has committed to launching "at least five" AI factories within the first 100 days of her mandate. The next deadline for additional proposals is set for 1 February 2025, with Cyprus and Slovenia already expressing interest.
Finally, a draft Code of Practice for GPAI
The European Union took a significant step towards regulating general-purpose artificial intelligence (GPAI) with the delayed release of the first draft of the Code of Practice (CoP) on 14 November 2024. This 36-page document outlines key requirements for GPAI providers, including transparency measures, governance, risk assessments and mitigation strategies for models with potential systemic risks. The draft CoP is part of the EU's broader AI Act and aims to provide guidance for compliance. Stakeholders had until 28 November to submit feedback, with the final version expected by April next year, aligned with the EU AI Act’s principles. Some critics have expressed concerns that the CoP will go further than the AI Act. However, the current draft of the code repeatedly references relevant parts of the AI Act, especially in contentious areas such as third-party testing and evaluation of models.
UK initiatives
UK launches AI assurance platform for safe and ethical adoption
The UK government has unveiled a new AI assurance platform designed to promote safe, ethical and responsible use of AI across various industries, with the aim of reinforcing the country's position as a global leader in AI safety. Launched on 6 November 2024, this one-stop shop will provide businesses of all sizes with essential resources to identify and mitigate potential risks associated with AI technologies, while also offering practical tools such as impact assessment guidelines and a self-assessment tool for responsible AI management. Announcing the launch, Peter Kyle, the Secretary of State for Science, Innovation and Technology, emphasised that building trust in AI systems is crucial to unlocking their potential to enhance public services and drive economic growth.
The same announcement included news of a new AI safety partnership between the AI Safety Institutes of the UK and Singapore. The two institutes will work closely together to drive forward research, develop a shared set of policies, standards and guidance, and establish a common approach to the responsible development and deployment of advanced AI models across the globe.
CMA to probe Google's investment in Anthropic
The UK's competition regulator, the CMA, has announced a formal investigation into Google's investment in Anthropic, having sought input earlier this year on whether the deal would stifle competition in the AI sector. The investigation reflects growing regulatory scrutiny of major tech companies' financial ties to AI start-ups, and of the concentration of power and resources in the rapidly evolving AI industry, particularly where tech giants like Google are involved.
UK Government and Microsoft ink five-year AI partnership
The UK Government and Microsoft have signed a five-year agreement to enhance access to next-generation AI and cloud services for public sector organisations. The partnership aims to drive digital transformation across government bodies, offering cost savings on a suite of Microsoft products, including Microsoft 365 and the Azure cloud platform. With a focus on increasing efficiency and innovation, the deal is expected to improve service delivery for citizens while addressing the growing digital skills gap through new training programmes. Microsoft CEO Satya Nadella emphasised the transformative potential of AI in public services, positioning the collaboration as a pivotal step towards a more digitally adept government infrastructure.
North American initiatives
Congressional leaders push for AI legislation ahead of Trump administration
As Congressional leaders, including Chuck Schumer and Mitch McConnell, negotiate AI-related legislation during the lame-duck session before January 2025, the impending Trump administration adds urgency to their discussions. While there is bipartisan support for AI research and workforce training, contentious issues such as AI's impact on misinformation, elections and national security may complicate consensus. Schumer's "AI policy roadmap" aims to guide these efforts, with the potential for any resulting legislation to be attached to must-pass bills like government funding or the National Defense Authorization Act. With uncertainty surrounding the Trump administration's approach to AI regulation, lawmakers are keen to establish a regulatory framework that could influence future policy directions in this critical area.
DHS unveils pioneering framework for AI Safety in Critical Infrastructure
The US Department of Homeland Security (DHS) has unveiled the "Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure", an initiative to ensure the safe and secure deployment of AI in essential services. This voluntary framework, developed collaboratively with input from industry, academia, civil society and government stakeholders, outlines specific responsibilities for entities across the AI supply chain, including cloud providers, AI developers and critical infrastructure operators. It addresses key vulnerabilities, promotes transparency and accountability, and aims to enhance the resilience of critical systems while harnessing AI's potential benefits. The framework recommends risk-based mitigations, encourages information sharing, and aligns with existing federal AI safety initiatives.
Canada unveils new Artificial Intelligence Safety Institute
Canada has launched the Canadian Artificial Intelligence Safety Institute (CAISI) to address AI risks and foster responsible development. With an initial budget of $50 million over five years, CAISI is part of a broader $2.4 billion federal investment in AI. The institute will conduct research through two streams: applied and investigator-led research, and government-directed projects. CAISI aims to leverage Canada's leading AI ecosystem, including research hubs like Mila, Amii and the Vector Institute, to advance safety-focused solutions and collaborate internationally.
Biden and Xi prioritise human oversight in nuclear AI use
US President Joe Biden and Chinese President Xi Jinping have reached a significant agreement to prioritise human decision-making over artificial intelligence in the context of nuclear weapons use. This commitment, made during their recent summit, underscores the importance of maintaining human control in military AI applications, particularly as concerns grow regarding China's expanding nuclear capabilities. The leaders emphasised the necessity of ongoing discussions to address the risks associated with advanced AI systems, aiming to enhance nuclear safety and prevent potential miscalculations in military operations.
House AI Task Force advocates balanced approach to AI regulation
The bipartisan House AI Task Force is developing AI legislation that stresses human oversight and a light-touch regulatory approach, which will see the US moving closer to the UK and away from the EU in terms of approach and direction. The task force is proposing a "hub-and-spoke" model, where sector-specific regulators craft tailored guidelines for their respective industries. This approach aims to address concerns such as disinformation, cybersecurity threats and synthetic content while avoiding overly burdensome regulations that could stifle innovation, particularly for small businesses. The task force's priorities include keeping humans central to AI decision-making processes, especially in sensitive applications, and maintaining the US's leadership in AI innovation. With a focus on bipartisan co-operation, the task force is working towards producing a comprehensive report by the end of 2024, which will include guiding principles and policy proposals for responsible AI development and use.
Industry initiatives and events
Microsoft and Andreessen Horowitz unite against over-regulation of AI
Tech giant Microsoft and venture capital firm Andreessen Horowitz (a16z) have joined forces to issue a joint statement arguing that over-regulation could stifle innovation and place undue burdens on start-ups. They advocate a market-based, federal approach to AI regulation that punishes misuse rather than imposing proactive restrictions, claiming this would protect innovation while addressing potential harms. Their shared vision emphasises collaboration between "Big Tech" and "Little Tech" to foster innovation and maintain the US's economic competitiveness in the AI era.
TSMC suspends advanced chip shipments to China amid US pressure
Taiwan Semiconductor Manufacturing Company (TSMC), the world's leading chip manufacturer, has reportedly halted shipments of advanced semiconductors to Chinese companies with effect from 11 November 2024, following pressure from the US Department of Commerce over concerns about potential violations of export controls. The suspension specifically targets chips of 7nm (nanometres) or smaller, which are crucial for AI and high-performance computing applications. The decision is likely to impact Chinese tech giants like Alibaba and Baidu, who rely on TSMC's chips for their AI development. The move underscores ongoing tensions in the global semiconductor industry, particularly between the US and China, and highlights TSMC's delicate position as it navigates geopolitical pressures while maintaining its technological leadership.
GSA publishes “AI for good” white paper in collaboration with CGI
The Global Sourcing Association and IT services firm CGI have published a white paper, "Delivering Artificial Intelligence", which aims to increase awareness, stimulate discussion and provide guidance to the sourcing market regarding buyers' concerns and challenges in procuring AI solutions, as well as the risks suppliers face in protecting their intellectual property. The white paper was produced following a roundtable exploring organisational readiness for ethical, responsible and accountable use of AI, involving CEOs, industry experts, academics and thought leaders. Key themes include the need for integrated approaches to AI development, balancing AI's potential with ethical concerns, and navigating complex regulatory frameworks like the EU AI Act.
TCS and NVIDIA expand partnership to accelerate industry-specific AI solutions
TCS has announced the expansion of its five-year alliance with NVIDIA to deliver more tailored AI offerings to customers across various industries. The expanded collaboration aims to help TCS's clients scale their AI adoption more effectively. TCS's focus will be on leveraging NVIDIA's AI stack (for example, AI Foundry) to develop industry-specific solutions, such as using LLMs in the manufacturing sector to transform raw data into actionable insights.
Databricks and Amazon Web Services forge alliance to supercharge Generative AI
Databricks and AWS have expanded their partnership to accelerate the development of custom generative AI models, building on the parties' existing relationship. The collaboration leverages Databricks' Mosaic AI platform, powered by AWS Trainium chips, to offer enhanced capabilities for pre-training, fine-tuning, augmenting and deploying LLMs. Key features of the alliance include improved model optimisation, enhanced security measures and simplified integration through AWS Marketplace.
AI investments continue at pace
- Read AI completed a Series B funding round, valuing the AI start-up at £346 million. Read AI offers a range of features to business users, including note-taking, transcription and summarisation, which aim to streamline meeting documentation and facilitate actionable insights, allowing organisations to improve efficiency and collaboration in their workflows.
- UK start-up Tessl announced a $125 million raise consisting of a $25 million seed round led by boldstart ventures and Google Ventures (GV), followed by a $100 million Series A led by Index Ventures, with participation from Accel and GV. The company plans to launch its AI Native Software Development platform in early 2025 and has opened a waitlist for interested developers.
- Juniper Networks invested £80.4 million in AI inference company Recogni during its Series C funding round, co-led by Celesta Capital and GreatPoint Ventures. Recogni aims to develop scalable and energy-efficient solutions for running complex AI models across cloud environments and data centres, leveraging a patented AI inference accelerator that uses "pareto math" to enhance performance while reducing costs and energy consumption.
- xAI, backed by Elon Musk, is reported to be in discussions to raise up to $6 billion in a new funding round that could value the company at $50 billion, doubling its previous valuation of $24 billion from just a few months ago. The funding is primarily aimed at purchasing 100,000 Nvidia chips for xAI's new Memphis supercomputer facility, providing the computational power necessary to train and operate its chatbot, Grok, which has real-time access to data from X (formerly Twitter).
- GEMESYS, a pioneering AI hardware start-up based in Bochum, Germany, successfully secured €8.6 million (approximately £7.1 million) in pre-seed funding to advance its innovative chip technology. This round, led by the Amadeus APEX Technology Fund and Atlantic Labs, along with contributions from NRW.BANK, Sony Innovation Fund and Silicon Valley's Plug and Play Tech Center, aims to accelerate research and development of GEMESYS's cutting-edge AI chips designed for efficient on-device training and inference.
- Amazon announced an additional investment of $4 billion in the AI start-up Anthropic, bringing its total investment to $8 billion. As part of the agreement, Anthropic will designate AWS as its primary training partner, utilising AWS's specialised chips for developing and deploying its advanced AI models.
JLL looks to transform commercial real estate with Falcon
The integration of artificial intelligence into various industries continues to accelerate, with commercial real estate emerging as a notable frontier. JLL, a global leader in commercial real estate and investment management, announced the launch of JLL Falcon, touted as the "first comprehensive, ultra-secure AI platform for the commercial real estate industry". This innovative platform combines market data, business trends and JLL's proprietary research with generative AI models to extract valuable insights. JLL Falcon builds upon the success of JLL GPT, introduced in August 2023 as the first large language model specifically designed for the commercial real estate sector.
AI content on Wikipedia soars
Recent research reveals a striking rise in AI-generated content on Wikipedia, with approximately 5% of newly created English-language articles containing significant AI-generated material. This surge, detected using advanced AI detection tools, raises concerns about accountability and accuracy, as flagged articles often exhibit lower quality and may lean towards self-promotion or biased viewpoints. The findings highlight the growing influence of AI in shaping online information sources and prompt critical discussions about the implications for content reliability on platforms like Wikipedia.
Microsoft struggling to keep pace with AI demand
Microsoft's recent earnings report reveals significant challenges in meeting the surging demand for AI services due to data centre capacity constraints. While Azure revenue grew by 34% year-over-year, CFO Amy Hood indicated that current AI demand exceeds available capacity, leading to a projected slowdown in growth rates for the upcoming quarter. In response, Microsoft is investing heavily in expanding its data centre infrastructure, including a $2.9 billion project in Japan and a $3.16 billion initiative in the UK, as well as securing long-term energy supply agreements to support these expansions. Despite these hurdles, the company remains optimistic about future growth as new data centres come online, seeking to position itself favourably in the competitive cloud landscape.
And finally… rare bees sting Meta's nuclear-powered AI ambitions
In an unexpected twist, a colony of rare bees has thrown a wrench into Meta’s plans for a cutting-edge, nuclear-powered AI data centre next to the Susquehanna Steam Electric Station nuclear power plant in Pennsylvania. The discovery of these endangered pollinators on the proposed site has forced the tech giant to temporarily shelve their ambitious project, much to the amusement of environmental activists and, presumably, the bees themselves. As we wrap up this peculiar tale of insects versus innovation, it is clear that even in the fast-paced world of AI and nuclear technology, nature still has a few surprises up its sleeve.
As we approach the end of another year, and with thoughts of Sam Altman’s bombshell prediction that artificial general intelligence (AGI) will arrive by 2025, we want to wish you joy, peace and cherished moments with loved ones in what may become known as the pre-singularity era.
With that in mind, we look forward to embarking on new adventures and opportunities together in the coming year and wish you all a wonderful Christmas period filled with warmth and happiness, and, when it comes, a very Happy New Year!