GAO Report: U.S. Federal Agencies Struggle to Address AI-Related Cybersecurity Risks

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/30/2025
In News

A new Government Accountability Office (GAO) report has found that while most U.S. federal agencies are adopting or exploring artificial intelligence (AI), many are not adequately addressing the cybersecurity risks associated with its use—especially in critical areas such as facial recognition, natural language processing, and predictive analytics.

The report, released in May 2025, surveyed 23 civilian agencies and found that 20 currently use or plan to use AI, primarily to enhance mission delivery and boost productivity. Common AI applications included chatbots, image recognition, and fraud detection. However, the report raises serious concerns about agencies’ preparedness to manage AI-related cybersecurity threats and the inconsistent implementation of federal guidance.

The GAO identified four main categories of cybersecurity risks associated with AI: adversarial attacks (like data poisoning), exploitation of AI-generated content, theft of AI models, and misuse of AI. These risks, if left unchecked, could lead to incorrect or harmful outputs, compromised systems, or violations of privacy and national security.

While the Office of Management and Budget (OMB) and the Cybersecurity and Infrastructure Security Agency (CISA) have issued some guidance related to AI and cybersecurity, the GAO report found it insufficient. For example, neither agency has issued tailored guidance addressing the unique vulnerabilities introduced by AI models. CISA stated that it intends to publish a cybersecurity roadmap for AI later this year.

Only a few agencies reported fully implementing key AI risk management practices outlined in existing guidance. For instance, fewer than half had evaluated the effectiveness of safeguards for AI systems, and many lacked processes for assessing the trustworthiness of AI outputs or models.

The GAO is now recommending that OMB and CISA strengthen and expand their cybersecurity guidance to specifically include AI, and that they work with federal agencies to ensure consistent adoption. It also urges OMB to direct agencies to identify and inventory AI systems in a way that accounts for cybersecurity risks.

The GAO emphasized that without a comprehensive, government-wide strategy, the federal government remains vulnerable as it increasingly integrates AI into critical operations.

The GAO warns that as the federal government increasingly uses AI, cybersecurity threats will only grow more complex. Agencies must move beyond exploratory adoption and take decisive steps to secure these systems.

Need Help?

If you have questions or concerns about global AI guidelines, regulations, or laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance news by subscribing to our newsletter.