UK Government Launches £8.5 Million Grants Program to Boost AI Safety Research

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/24/2024
In News

UPDATE — SEPTEMBER 2025: The UK’s Systemic AI Safety Grants Programme has moved from planning to action since its launch in late 2024. In February 2025, the government and the AI Safety Institute awarded funding to around 20 projects across UK universities and research labs, with themes ranging from detecting and mitigating deepfake misuse in elections to strengthening AI resilience in finance, energy, and public-sector decision-making. Several projects also pioneered methods for “systemic red teaming” of AI infrastructure.

International partnerships quickly followed. By spring 2025, some of the UK-funded projects were co-led with U.S. and Canadian researchers, dovetailing with trilateral AI safety principles announced earlier in the year. These collaborations highlight the UK’s push to build global consensus on AI risk management while reinforcing its own role as a hub for systemic AI safety research.

In July 2025, the Department for Science, Innovation and Technology confirmed a Phase 2 extension of the programme, with another £4–4.5 million to be awarded later this year. Calls for new proposals are expected in October, with decisions anticipated by early 2026.

Outputs from the first round are already influencing policy. Findings were incorporated into the AI Safety Institute’s Summer 2025 “State of AI Systemic Risks” report, which underscored vulnerabilities in supply chain dependencies and flagged risks tied to AI in public administration. That report was presented to policymakers ahead of the Toronto follow-up to the 2024 AI Safety Summit.

ORIGINAL NEWS STORY:

UK Government Launches £8.5 Million Grants Program to Boost AI Safety Research

The UK government recently unveiled a new grants program to support research into safeguarding society against risks posed by artificial intelligence (AI), such as deepfakes, misinformation, and cyberattacks. The program, led by the AI Safety Institute in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, is designed to enhance “systemic AI safety,” with £4 million available in the first phase.

Researchers across the UK can apply for individual grants of up to £200,000 to tackle the growing societal challenges presented by AI technologies. The initiative is part of the government’s broader strategy to promote responsible AI development while ensuring public trust as AI becomes more embedded in various sectors of the economy, including healthcare, energy, and finance.

The new funding aims to strengthen research into AI risks and develop solutions that help prevent AI systems from failing unexpectedly in areas such as financial services or critical infrastructure. By focusing on systemic safety, researchers will examine the underlying systems and infrastructure that support AI technology, identifying vulnerabilities that could have far-reaching effects if left unaddressed.

Secretary of State for Science, Innovation and Technology Peter Kyle emphasized the importance of public trust in AI as a cornerstone of the UK’s AI strategy. He stated, “My focus is on speeding up the adoption of AI across the country so that we can kickstart growth and improve public services. Central to that plan, though, is boosting public trust in the innovations which are already delivering real change.”

He added that the grants program would support research that ensures AI systems are safe and trustworthy from the outset, fostering confidence in their rollout across the economy.

The launch of the Systemic AI Safety Grants Program follows the UK government’s earlier commitment to introducing targeted regulations for companies developing the most advanced AI models. This approach seeks to regulate AI in a balanced manner, without burdening the industry with unnecessary blanket rules, allowing innovation to flourish while maintaining safety and trust.

Ian Hogarth, Chair of the AI Safety Institute, highlighted the broad objectives of the program, which will fund approximately 20 projects in the first phase. “This grants program allows us to advance broader understanding on the emerging topic of systemic AI safety. It will focus on identifying and mitigating risks associated with AI deployment in specific sectors which could impact society,” Hogarth said.

Hogarth also pointed out the importance of addressing issues like deepfakes, AI system failures, and other vulnerabilities that AI adoption may introduce. The research will help develop tools and strategies that can be applied across various sectors to mitigate these risks.

A key component of the initiative is fostering collaboration between UK-based researchers and international partners. By including international participants, the UK aims to strengthen global cooperation on AI safety, creating a shared approach to managing the risks posed by rapidly advancing AI technologies.

The program aligns with the AI Safety Institute’s mission to evaluate the safety of AI models while contributing to global AI governance. “By bringing together researchers from a wide range of disciplines and backgrounds into this process, we’re building up empirical evidence of where AI models could pose risks, so we can develop a rounded approach to AI safety for the global public good,” Hogarth added.

Applicants have until November 26, 2024, to submit proposals, with the government assessing each application based on the potential risks the research addresses and the solutions it offers. Successful applicants will be announced by the end of January 2025, with the first round of grants awarded in February 2025.

The AI Safety Institute expects this initial wave of projects to deepen the understanding of AI risks and provide insights into how society can better manage the deployment of AI systems in critical sectors. The total fund of £8.5 million, first announced at the AI Seoul Summit in May 2024, will be distributed over multiple phases, with additional grants to be awarded as the program progresses.

Need Help?

Keeping track of the growing AI regulatory landscape can be difficult, so if you have any questions or concerns, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and AI Governance news by subscribing to our newsletter.