UPDATE — OCTOBER 2025: Since the launch of the UK’s £5 million AI Security Challenge Fund, the program—managed by the AI Security Institute (AISI)—has expanded significantly as part of the government’s broader AI safety and assurance strategy. In June 2025, the AISI refined the fund’s structure to make it more accessible to small research teams, shifting to monthly reimbursement payments and introducing updated priority research areas aligned with national AI security goals.
The Challenge Fund has since evolved into a multi-tiered funding ecosystem totaling more than £20 million in combined AI safety investments. This includes the £15 million “Alignment Project”, launched in collaboration with Canada’s AI Safety Institute, AWS, and Schmidt Sciences, which supports large-scale studies on misaligned AI behavior, existential risk, and control mechanisms. Grant sizes under this program range from £50,000 to £1 million, broadening participation across academia, nonprofits, and industry. Earlier, in February 2025, the government distributed £8.5 million in related grants through a joint AISI–Innovate UK–EPSRC initiative, funding roughly 20 projects focused on critical challenges such as deepfakes, misinformation, and AI infrastructure resilience.
ORIGINAL NEWS STORY:
UK Launches £5M AI Security Challenge Fund to Tackle Critical Risks and Boost Public Trust
The UK government launched a £5 million AI Security Challenge Fund to accelerate research on securing artificial intelligence systems and building public trust in the technology. Managed by the AI Security Institute, the new initiative will award grants of up to £200,000 to researchers and nonprofits working on AI safety and oversight.
The Challenge Fund targets four key areas: preventing AI misuse, protecting critical systems from failure, enabling robust human oversight, and reducing systemic risks in sectors like finance, healthcare, and energy. As AI systems become more advanced and autonomous, the risks of unintended consequences and malicious use grow—making investment in AI safety essential, officials said.
“AI is at the heart of our Plan for Change—driving economic growth, creating jobs, and transforming public services,” said Feryal Clark, Minister for AI and Digital Government. “But to unlock its full potential, we must ensure AI systems are secure, resilient, and trusted.”
The fund aims to generate real-world solutions to emerging threats, with a focus on advancing human oversight capabilities and ensuring AI systems align with societal values. As part of the UK’s broader strategy to become a global AI leader, the fund also seeks to remove adoption barriers by boosting public confidence in the safety of AI technologies.
Ian Hogarth, Chair of the AI Security Institute, said the funding will help address the most urgent open questions in AI safety. “Whether that’s ensuring AI systems remain resilient against misuse, or maintaining human control over autonomous systems, this fund will help build the evidence base we need to tackle these risks.”
Applications opened today, with grant winners to be announced within 12 weeks. The Challenge Fund aligns with the UK government’s Plan for Change, which emphasizes responsible AI adoption to improve public services and productivity nationwide.
Need Help?
If you’re wondering how these measures, or any other AI regulations and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.