UPDATE — AUGUST 2025: Since the Tony Blair Institute for Global Change published its 2024 report “Getting the UK’s Legislative Strategy for AI Right,” the UK’s AI governance landscape has shifted significantly. The call for a comprehensive legislative strategy is now being echoed in Parliament, by regulators, and in international forums as the country grapples with balancing innovation against the risks posed by frontier AI systems.
In March 2025, the Artificial Intelligence (Regulation) Bill was reintroduced in Parliament after the general election. The proposal, while still a private member’s bill, reflects growing momentum for statutory regulation. It would establish a central AI Authority, mandate impact assessments, create regulatory sandboxes, and require organizations to appoint AI officers with oversight of third-party training data. If enacted, it would mark a sharp turn away from the government’s earlier reliance on sector-specific regulators and voluntary commitments, aligning more closely with the risk-based model of the EU AI Act.
Meanwhile, the AI Safety Institute (AISI) underwent a transformation in February 2025, rebranding as the AI Security Institute. Its expanded mandate includes frontier AI risk evaluation in national security and defence contexts, signaling closer ties to the Ministry of Defence. While this move gives the UK a sharper security lens on AI governance, civil society groups have raised concerns that it could compromise the independent, research-focused role originally envisioned.
Broader regulatory initiatives are also underway. In January 2025, the government opened a copyright reform consultation to clarify how copyrighted materials can be used in AI training. This is a contentious issue for creators and developers alike, and its outcome is expected to play a pivotal role in shaping AI innovation in the UK.
Internationally, the UK has positioned itself as a leader in AI safety coordination. In February 2025, it released the International AI Safety Report 2025, a comprehensive literature review of frontier AI risks, presented as part of multilateral efforts to align global evaluation standards. This complements ongoing EU AI Act implementation and U.S. safety initiatives, underscoring the UK’s emphasis on interoperability and shared governance.
ORIGINAL NEWS STORY:
Tony Blair Institute Urges Comprehensive AI Legislation to Balance Innovation and Public Safety in the UK
The Tony Blair Institute for Global Change’s report, titled “Getting the UK’s Legislative Strategy for AI Right,” offers an in-depth analysis of, and recommendations for, the UK’s approach to regulating artificial intelligence (AI). With AI’s rapid advancements and growing impact on society, the UK government faces the challenge of fostering innovation while addressing public safety and ethical concerns. The report outlines the critical need for a comprehensive legislative strategy that supports the responsible development and use of AI.
The report emphasizes that while the UK has adopted a sector-specific approach to AI regulation, there are gaps that need to be addressed. Many existing regulators lack the technical expertise, resources, and powers to effectively manage AI-related risks, such as algorithmic bias, misinformation, and privacy violations. The report stresses that the UK government should increase funding for these regulators and provide them with the necessary tools to handle the evolving landscape of AI.
The authors argue for a balanced legislative strategy that both addresses public safety risks associated with AI, particularly advanced systems known as “frontier AI,” and ensures continued innovation. A proposed AI bill, expected to be drafted in the coming months, aims to make voluntary safety commitments legally binding and establish the AI Safety Institute (AISI) as an independent body to advance AI safety research.
One of the report’s key recommendations is the establishment of a focused bill that addresses the safety risks posed by frontier AI—highly capable AI models that can perform tasks surpassing today’s leading AI systems. The authors advocate for a proactive approach that builds on voluntary commitments by AI developers but includes legally binding obligations for transparency, risk management, and regular assessments of AI systems’ capabilities.
The report suggests that while frontier AI systems currently pose limited risks, the rapid pace of development means future models may introduce significant threats to cybersecurity, public safety, and privacy. Thus, it is crucial to set up frameworks that ensure AI developers prioritize safety while advancing AI capabilities.
The AI Safety Institute (AISI), which has been established as part of the UK’s broader AI governance framework, plays a pivotal role in ensuring the safe and responsible development of AI technologies. The report calls for AISI to be made a statutory body with a clear mandate to support third-party assessments, promote international safety standards, and conduct independent evaluations of AI systems. However, the report emphasizes that AISI should not act as a regulator, as this could compromise its status as an impartial and trusted technical body.
The authors also recommend that the AISI take on a leading role in global AI governance by promoting international collaboration and harmonizing safety standards with other countries, particularly the United States and the European Union. By doing so, the UK can help shape global AI safety practices and contribute to the establishment of international AI regulatory norms.
The report by the Tony Blair Institute for Global Change underscores the importance of aligning the UK’s AI legislation with emerging international regulations, such as the EU AI Act and US safety initiatives. International consistency will prevent regulatory fragmentation, which could hinder both AI safety and innovation. The report suggests that the UK should seek to collaborate with other nations on safety research and develop shared evaluation standards for AI systems.
Furthermore, the report highlights the need for mutual recognition agreements between countries, where AI systems that have undergone safety evaluations in one jurisdiction can be recognized in others. This would reduce duplication and streamline the regulatory process for AI developers, encouraging innovation while ensuring public safety.
The report also discusses the challenges of enforcing AI regulations. It argues that any AI legislation should be accompanied by incentives for compliance, particularly for developers of frontier AI systems. The government should clarify the roles of existing regulators in overseeing AI development and consider setting up a new regulator focused on frontier AI safety in the future.
To ensure public trust, the report’s authors recommend that the UK government adopt a flexible and incremental approach to AI legislation, allowing the regulatory framework to adapt to new challenges and emerging technologies. This approach would provide both the industry and the public with greater clarity and confidence in the UK’s ability to manage AI safely and effectively.
Need Help?
Keeping track of the growing AI regulatory landscape can be difficult, so if you have any questions or concerns, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.