AI Agents Poised to Transform Society, But Governance Lags Behind, Report Warns

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 04/21/2025
In News

A new report titled “Agent Governance: A Field Guide” warns that while artificial intelligence (AI) agents capable of autonomously completing complex tasks are gaining momentum across industries, global governance structures remain woefully unprepared to manage their risks and societal impacts.


The guide, authored by experts in the field and released in 2025, defines AI agents as systems that can independently pursue goals in the real world with minimal human input. These agents are already used in customer service, cybersecurity, and AI research, with major tech companies forecasting mass deployment in the next few years. Salesforce CEO Marc Benioff has predicted a billion AI agents by 2026, while Meta’s Mark Zuckerberg envisions “more AI agents than people.”


Yet the report highlights that current AI agents still struggle with reliability, reasoning, and executing longer-term tasks. Benchmarks show that agents consistently underperform humans on assignments that take over an hour. For example, Google's Project Zero found that even advanced agents stumble on high-stakes cybersecurity tasks, while the best agent tested on SWE-bench, an AI coding benchmark, solved only 33% of verified tasks, falling well short of human developers.


Despite these limitations, some agents are proving cost-effective. Klarna reports that AI agents are handling the workload of 700 human employees in customer service, and OpenAI’s CEO Sam Altman has suggested 2025 may be the year agents enter the mainstream workforce.


The report presents two possible futures: one where agents augment human potential and drive a societal renaissance, and another where agents act autonomously without oversight, causing systemic failures and economic disruption.


To avoid the latter, the guide introduces a new taxonomy of “agent interventions” to help policymakers and developers prepare. These include alignment protocols to ensure agents behave according to human values, visibility tools like activity logging, and societal integration measures such as liability regimes and equitable access frameworks.


The authors stress that the field of agent governance is still in its infancy, with most proposed solutions existing only on paper. They urge governments, civil society, and the tech industry to collaborate on developing legal, technical, and policy-based safeguards before AI agents become deeply embedded in global systems.


“Without coordinated intervention,” the report concludes, “the pace of progress in agent development may outstrip our ability to ensure these systems are safe, fair, and aligned with human goals.”

Need Help?


If you have questions or concerns about any global guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.


Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.