The UK government recently unveiled a new grants program to support research into safeguarding society against risks posed by artificial intelligence (AI), such as deepfakes, misinformation, and cyberattacks. The program, led by the AI Safety Institute in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, is designed to enhance “systemic AI safety,” with £4 million available in the first phase.
Researchers across the UK can apply for individual grants of up to £200,000 to tackle the growing societal challenges presented by AI technologies. The initiative is part of the government’s broader strategy to promote responsible AI development while ensuring public trust as AI becomes more embedded in various sectors of the economy, including healthcare, energy, and finance.
The new funding aims to strengthen research into AI risks and to develop solutions that help prevent AI systems from failing unexpectedly in areas such as financial services or critical infrastructure. By focusing on systemic safety, researchers will examine the underlying systems and infrastructure that support AI technology, identifying vulnerabilities that could have far-reaching effects if left unaddressed.
Peter Kyle, Secretary of State for Science, Innovation, and Technology, emphasized the importance of public trust in AI as a cornerstone of the UK’s AI strategy. He stated, “My focus is on speeding up the adoption of AI across the country so that we can kickstart growth and improve public services. Central to that plan, though, is boosting public trust in the innovations which are already delivering real change.”
He added that the grants program would support research that ensures AI systems are safe and trustworthy from the outset, fostering confidence in their rollout across the economy.
The launch of the Systemic Safety Grants Program follows the UK government’s earlier commitment to introducing targeted regulations for companies developing the most advanced AI models. This approach seeks to regulate AI in a balanced manner, without burdening the industry with unnecessary blanket rules, allowing innovation to flourish while maintaining safety and trust.
Ian Hogarth, Chair of the AI Safety Institute, highlighted the broad objectives of the program, which will fund approximately 20 projects in the first phase. “This grants program allows us to advance broader understanding on the emerging topic of systemic AI safety. It will focus on identifying and mitigating risks associated with AI deployment in specific sectors which could impact society,” Hogarth said.
Hogarth also pointed out the importance of addressing issues like deepfakes, AI system failures, and other vulnerabilities that AI adoption may introduce. The research will help develop tools and strategies that can be applied across various sectors to mitigate these risks.
A key component of the initiative is fostering collaboration between UK-based researchers and international partners. By including international participants, the UK aims to strengthen global cooperation on AI safety, creating a shared approach to managing the risks posed by rapidly advancing AI technologies.
The program aligns with the AI Safety Institute’s mission to evaluate the safety of AI models while contributing to global AI governance. “By bringing together researchers from a wide range of disciplines and backgrounds into this process, we’re building up empirical evidence of where AI models could pose risks, so we can develop a rounded approach to AI safety for the global public good,” Hogarth added.
Applicants have until November 26, 2024, to submit proposals, with the government assessing each application based on the potential risks the research addresses and the solutions it offers. Successful applicants will be announced by the end of January 2025, with the first round of grants awarded in February 2025.
The AI Safety Institute expects this initial wave of projects to deepen the understanding of AI risks and provide insights into how society can better manage the deployment of AI systems in critical sectors. The total fund of £8.5 million, first announced at the AI Seoul Summit in May 2024, will be distributed over multiple phases, with additional grants to be awarded as the program progresses.
Need Help?
Keeping track of the growing AI regulatory landscape can be difficult, so if you have any questions or concerns, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you stay informed and compliant.