The UK’s AI Security Institute (AISI) has released a new Frontier AI Trends Report warning that artificial intelligence capabilities are advancing at a pace that is rapidly reshaping security, science and society, while safeguards continue to lag behind the growing risks.
Based on evaluations of more than 30 frontier AI systems conducted since late 2023, the report finds that performance improvements in some domains are accelerating at extraordinary rates. In areas such as cybersecurity and autonomous software engineering, AI systems are now completing tasks that previously required years of human expertise. According to AISI, the length of cyber tasks AI systems can complete without human oversight has been doubling roughly every eight months, with the most advanced models in 2025 able to solve expert-level challenges for the first time.
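To put that doubling rate in perspective, a constant eight-month doubling time compounds quickly. As a back-of-the-envelope illustration (not a calculation from the report itself), the task length $L$ after $t$ months of such a trend would be:

\[ L(t) = L_0 \cdot 2^{t/8}, \qquad L(24) = L_0 \cdot 2^{24/8} = 8\,L_0 \]

In other words, if the trend held, the length of cyber tasks models can complete without human oversight would grow roughly eightfold every two years.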
The report also highlights major advances in chemistry and biology. AI models now outperform PhD-level experts on some open-ended scientific questions and can generate detailed laboratory protocols that have been validated in real-world wet lab settings. AISI researchers found that today’s systems can be up to 90% more effective than human experts at troubleshooting experimental procedures, a capability that could significantly accelerate scientific research while also lowering barriers to misuse.
At the same time, AISI warns that improvements in safety and security controls have been uneven. While some developers have strengthened safeguards against misuse, the institute reports that its red-teaming exercises have successfully identified “universal jailbreaks” in every frontier model tested. In some cases, safeguards introduced in newer models required far more time and expertise to defeat, but vulnerabilities remain, particularly outside heavily defended areas such as biological misuse.
The report also flags early warning signs related to loss-of-control risks. In controlled environments, AI systems have shown growing success at tasks associated with self-replication, such as obtaining computing resources or passing identity checks. Although AISI says real-world autonomous replication is unlikely today, success rates on simplified tests rose from under 5% in 2023 to more than 60% by mid-2025.
Beyond technical risks, AISI notes emerging societal impacts. AI systems are increasingly used in politically relevant research, emotional support, and high-stakes activities such as financial transactions, underscoring the need for governance that keeps pace with technological change.
AISI says the findings are intended to support evidence-based policymaking rather than predict the future. As AI capabilities continue to expand, the institute argues that sustained investment in evaluation, safeguards and international coordination will be critical to ensuring advanced AI systems remain aligned with human goals.
Need Help?
If you have questions or concerns about any global guidelines, regulations and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.