UK Government Launches £8.5 Million Grants Program to Boost AI Safety Research

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/24/2024
In News

UPDATE — SEPTEMBER 2025: The UK’s Systemic AI Safety Grants Programme has moved from planning to action since its launch in late 2024. In February 2025, the government and the AI Safety Institute awarded funding to around 20 projects across UK universities and labs, with themes ranging from detecting and mitigating deepfake misuse in elections to strengthening AI resilience in finance, energy, and public-sector decision-making. Several projects also pioneered methods for “systemic red teaming” of AI infrastructure.

International partnerships quickly followed. By spring 2025, some of the UK-funded projects were co-led with U.S. and Canadian researchers, dovetailing with trilateral AI safety principles announced earlier in the year. These collaborations highlight the UK’s push to build global consensus on AI risk management while reinforcing its own role as a hub for systemic AI safety research.

In July 2025, the Department for Science, Innovation and Technology confirmed a Phase 2 extension of the programme. It came with another £4–4.5 million to be awarded later this year. Calls for new proposals are expected in October, with decisions anticipated by early 2026.

Outputs from the first round are already influencing policy. Findings were incorporated into the AI Safety Institute’s Summer 2025 “State of AI Systemic Risks” report, which underscored vulnerabilities in supply chain dependencies and flagged risks tied to AI in public administration. The report was also presented to policymakers ahead of the Toronto follow-up to the 2024 AI Safety Summit.

ORIGINAL NEWS STORY:

UK Government Launches £8.5 Million Grants Program to Boost AI Safety Research

The UK government has launched a new grants program to support research that tackles risks from artificial intelligence (AI). The program focuses on threats such as deepfakes, misinformation, and cyberattacks. Led by the AI Safety Institute, in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, the initiative provides £4 million in its first phase.

Grants Aim to Improve Systemic AI Safety

Researchers across the country can apply for grants of up to £200,000 each. These awards support projects that address the growing challenges created by AI systems. The program is part of a wider national effort to promote responsible AI development and strengthen public trust in these technologies. As AI becomes more common in sectors such as healthcare, energy, and finance, the government wants to ensure that systems operate safely and reliably.

The funding aims to identify vulnerabilities in the infrastructure that supports AI technologies. By improving “systemic AI safety,” researchers will look at how AI behaves inside larger systems. This includes preventing failures in financial services, critical infrastructure, and other high-impact environments.

Leaders Stress the Need for Public Trust

Secretary of State for Science, Innovation, and Technology Peter Kyle emphasized the need for trust in AI. He said he wants to speed up AI adoption to boost growth and improve public services, but he views safety as essential. According to Kyle, the grants will help researchers design systems that people can trust from the start. Ian Hogarth, Chair of the AI Safety Institute, noted that the program will fund around 20 projects in its first phase. He said the research will help identify and reduce risks linked to deepfakes, AI failures, and other systemic threats. Hogarth added that the work will build practical tools for managing risks across many sectors.

Program Supports International Collaboration

Collaboration is a core part of the program. UK researchers are encouraged to work with international partners to create a shared approach to AI safety. This aligns with the mission of the AI Safety Institute, which evaluates advanced AI models and contributes to global standards. The government will review applications based on the risks each project addresses and the strength of the proposed solutions. Researchers must submit their proposals by November 26, 2024, and the government plans to announce the successful applicants by the end of January 2025. The first round of grants will be issued in February 2025.

Long-Term Investment in AI Safety

The grants are part of an £8.5 million fund announced during the AI Seoul Summit in May 2024. Additional funding rounds will follow as the program expands. The government expects the first wave of projects to deepen national understanding of AI risks and strengthen the safe deployment of AI systems across critical sectors.

Need Help?

Keeping track of the growing AI regulatory landscape can be difficult. If you have any questions or concerns, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
