The “International AI Safety Report 2025,” produced by an international panel of artificial intelligence (AI) experts and government representatives, warns that rapid advancements in AI pose significant risks if left unchecked. The report, chaired by Professor Yoshua Bengio and backed by 30 countries, the UN, the EU, and the OECD, provides a detailed assessment of AI’s current and future capabilities, the risks they pose, and potential mitigation strategies.
The findings underscore the growing concerns surrounding general-purpose AI—a category that includes large language models and AI agents capable of performing a wide variety of tasks with minimal human oversight. The report highlights that while AI presents extraordinary opportunities for economic growth, healthcare, and scientific discovery, its rapid development outpaces regulatory frameworks, leaving critical gaps in governance and safety.
The report notes that AI capabilities have increased significantly in recent months. General-purpose AI models now demonstrate expert-level proficiency in fields such as scientific reasoning, cybersecurity, and even biological research. While these advancements drive innovation, they also introduce new risks, particularly in cyberattacks, misinformation, and autonomous decision-making.
One of the most concerning areas identified is AI’s potential misuse for cyber offense and biological threats. The report cites evidence of AI models identifying and exploiting cybersecurity vulnerabilities, as well as providing guidance on the development of toxic compounds. Experts warn that without stringent safeguards, AI could enable malicious actors to scale sophisticated attacks beyond current detection capabilities.
Additionally, the manipulation of public opinion through AI-generated misinformation remains a pressing issue. The rise of deepfake technology, AI-generated propaganda, and automated disinformation campaigns could destabilize elections and undermine trust in institutions, the report warns.
The report also flags systemic risks, including the concentration of AI development in a few countries, widening global inequalities in AI access, and the environmental impact of training massive AI models.
The lack of comprehensive global AI governance is a recurring theme in the report. While initiatives such as the EU AI Act and the Council of Europe’s AI Convention have set early regulatory benchmarks, enforcement remains uneven across jurisdictions.
The report recommends a combination of legally binding and non-binding measures to ensure AI safety. These include sector-specific regulations, international collaboration on AI risk assessment, and increased transparency in AI development. It also calls for robust oversight mechanisms to prevent unintended consequences, including data privacy violations, bias in AI decision-making, and loss of human oversight in critical systems.
Moreover, experts emphasize the need for early warning systems and pre-emptive risk management strategies. The report stresses that AI’s trajectory is unpredictable and that governments should not wait for catastrophic incidents before implementing stricter controls.
Need Help?
If you have questions or concerns about this report, or about any global AI guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.