GAO Report Warns of Unclear Environmental and Human Effects from Generative AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 04/29/2025
In News

Generative artificial intelligence (AI) holds the potential to revolutionize industries and boost productivity, but its environmental and human impacts remain poorly understood, according to a new report released by the U.S. Government Accountability Office (GAO) on Monday.


The report, titled “Artificial Intelligence: Generative AI’s Environmental and Human Effects,” highlights the significant energy and water resources required to train and operate generative AI models, alongside a range of human risks such as job displacement, misinformation, and cybersecurity threats.


GAO analysts found that while generative AI systems such as large language models (LLMs) are expanding rapidly, companies are not consistently reporting the energy, water, and carbon footprints associated with training or running these systems. Estimates suggest that training a single major AI model can consume thousands of megawatt-hours of electricity and millions of liters of water, but a lack of standardized reporting has left significant knowledge gaps. The International Energy Agency projects that U.S. data center energy consumption, driven in part by AI, could rise from 4% of national demand in 2022 to as high as 6% by 2026.


On the human side, the GAO warned of five major risks: unsafe system outputs, threats to data privacy, cybersecurity vulnerabilities, unintentional bias, and lack of accountability. Generative AI could amplify misinformation, replicate societal biases, and compromise sensitive information, the report noted. Moreover, inadequate transparency from developers hinders independent evaluation of AI models’ behavior and safety.


To address these risks, the GAO outlined several policy options for lawmakers and regulators. These include improving data collection on the environmental impacts of AI, encouraging the development of resource-efficient technologies, promoting the use of AI accountability frameworks, and fostering the creation of industry standards for ethical AI practices. The agency also emphasized that while technical innovations may mitigate some of these challenges, a proactive governance approach is necessary to ensure AI's benefits are equitably shared.


“Generative AI’s rapid growth and resource demands pose serious questions about sustainability, fairness, and security,” the GAO report states. “Without greater transparency and oversight, the risks could outweigh the benefits.”


Need Help?


If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

