Federal agencies nearly doubled their total artificial intelligence (AI) use cases from 2023 to 2024—and increased their use of generative AI nearly nine-fold—according to a new report from the U.S. Government Accountability Office (GAO).
The July 2025 report, requested by Congress, reviewed efforts by 12 federal agencies to adopt and manage generative AI, which refers to AI systems that can create text, images, audio, and other content based on user prompts. The number of generative AI use cases reported by 11 of the agencies (excluding the Department of Defense, which is exempt from public inventory reporting) jumped from 32 in 2023 to 282 in 2024.
Use Cases
Most use cases focused on mission-support functions such as improving written communication, streamlining internal workflows, and enhancing data access. Examples include the Department of Energy’s AskOEDI tool, which answers questions about public energy datasets, and the Department of Homeland Security’s (DHS) use of AI-powered code generation tools for software development. Agencies also reported generative AI applications in health, such as the Department of Veterans Affairs’ medical imaging automation and the Department of Health and Human Services’ (HHS) efforts to detect polio outbreaks using AI-driven data extraction.
But the report highlights that widespread adoption is being slowed by a range of challenges. Ten of the 12 agencies told GAO that existing federal privacy, cybersecurity, and procurement policies can hinder deployment. Others reported difficulties securing advanced computing infrastructure, navigating delays in acquiring cloud services, and hiring AI-skilled personnel.
Six agencies said the technology’s fast-paced evolution has made it difficult to keep internal policies up to date. And several raised concerns over data security, AI “hallucinations,” and a lack of transparency in how generative systems produce outputs—issues that can affect trust and reliability.
Despite the hurdles, agencies are actively developing risk management protocols, usage policies, and employee training programs. Many are turning to established frameworks, including NIST’s AI Risk Management Framework and GAO’s AI Accountability Framework, to guide responsible use. Eleven agencies have already established generative AI use policies, and all 12 have adopted data protection training for staff.
Conclusion
The report concludes that while generative AI use is rapidly expanding across government, careful management, cross-agency collaboration, and continued policy development will be necessary to ensure that these tools are deployed safely, responsibly, and effectively.
Need Help?
If you have questions or concerns about any global guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.