G7 Cyber Experts Warn of Rising AI-Driven Risks to Global Finance

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/07/2025
In News

The G7 Cyber Expert Group (CEG) has issued a statement urging governments, regulators, and financial institutions to strengthen collaboration and vigilance against emerging cybersecurity risks posed by artificial intelligence, particularly generative and autonomous systems.


The statement, released in September 2025, emphasizes that while AI offers opportunities to enhance defenses—such as detecting anomalies, blocking AI-generated phishing, and predicting system failures—it also creates new avenues for malicious activity. Threats include hyper-personalized deepfake scams, automated exploit development, and malware capable of evolving in real time to evade detection.


The CEG highlighted vulnerabilities unique to AI itself. These include “data poisoning” of training sets, sensitive information leaks through interactions with public AI tools, and “prompt injection” attacks designed to manipulate outputs or extract restricted data. Such risks, the group warned, could undermine resilience and erode trust in the financial system if left unaddressed.


While the statement does not establish binding rules, it lays out key considerations for financial institutions, such as ensuring secure-by-design integration of AI, vetting data sources, updating incident response plans, and addressing skills gaps in AI literacy. Supervisory authorities are encouraged to integrate AI-specific risks into existing frameworks and to deepen engagement with technology firms, academia, and international partners.


The group also cautioned that widespread reliance on third-party AI service providers could concentrate risks: a single major incident could cascade across global markets. To mitigate this, the CEG urged proactive governance, stronger oversight, and sustained public-private dialogue.


“AI can bolster cyber resilience,” the group concluded, “but only if its deployment is risk-informed, collaborative, and rooted in robust oversight.”


Need Help?


If you have questions or concerns about any global guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance News by subscribing to our newsletter.