U.S. GAO Analyzes Risks and Safeguards in Generative AI Development and Deployment

Written by Jeremy Werner

Jeremy is an experienced journalists, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/11/2024
In News

The U.S. Government Accountability Office (GAO) recently released a report examining the fast-evolving commercial sector of generative artificial intelligence (AI). The report highlights common practices used by developers to ensure responsible deployment, alongside key challenges that compromise model reliability and security.

 

The GAO report identifies several best practices employed by commercial developers in the responsible development of generative AI, emphasizing comprehensive testing, multidisciplinary evaluations, and safety protocols.

 

  1. Benchmark Testing: Developers rely on standardized benchmark tests to assess model accuracy and reliability across various domains. These tests evaluate model performance in areas like general reasoning, mathematical problem-solving, and coding capabilities, providing a quantitative foundation for further improvement.

 

  1. Multidisciplinary Review: AI developers frequently involve experts from multiple fields, such as cybersecurity, ethics, and legal studies, in evaluating the potential risks of their models before release. This collaborative approach helps detect vulnerabilities, particularly in relation to sensitive or harmful content, and informs modifications for safer deployment.

 

  1. Red Teaming for Security: Red teaming—emulating potential attacks—is central to AI risk management. This strategy is applied across AI models to identify flaws that malicious users might exploit. The report notes that developers now conduct red teaming focused on threats like unauthorized replication and cybersecurity breaches. Red teaming practices have proven essential in mitigating risks, though developers acknowledge they may not address every potential vulnerability.

 

  1. Post-Deployment Monitoring: Once AI models are live, developers continue to monitor them for misuse. This includes tracking user interactions that could indicate attempts to exploit or manipulate model outputs, such as spreading misinformation or generating explicit content. Developers often use such data to restrict access for users who violate safety policies.

 

  1. Data Policies for Privacy and Safety: Companies have established data policies to guide the ethical collection and use of information for model training, reducing reliance on personal data where possible. Privacy and safety standards are applied to training datasets to minimize biases and protect users’ personal information.

 

Despite best practices, generative AI technology faces significant limitations. Developers concede that models are not entirely reliable and can still produce inaccurate or biased outputs. This issue, often referred to as “hallucination” or “confabulation,” is of particular concern, as it may lead users to trust and disseminate false information unwittingly. Additionally, the models’ reliance on large datasets, often scraped from publicly available sources, introduces the risk of data poisoning—a form of attack in which malicious actors alter training data to manipulate model outputs.

 

The report also warns about vulnerabilities like prompt injection attacks and jailbreaking. In these scenarios, users can manipulate AI prompts to circumvent safeguards, potentially leading to harmful outcomes, such as spreading malware or generating offensive content. Combatting these issues requires continuous monitoring and rapid response protocols to mitigate emerging threats.

 

A pressing issue highlighted by the GAO is the lack of transparency around the data used to train generative AI models. Developers typically disclose only general information about their datasets, which may include content publicly available on the internet, data purchased from third parties, or user-provided data. This limited transparency has raised concerns about data privacy and the potential misuse of copyrighted information.

 

To address these issues, companies have implemented privacy evaluations throughout the development process, filtering training data to reduce personal and sensitive information. However, the efficacy of these measures remains uncertain, with experts noting the difficulty of completely removing personal data from massive datasets.

 

The GAO plans to continue investigating generative AI, with future reports expected to delve into the societal and environmental impacts of the technology. This ongoing assessment reflects the government’s growing focus on managing the risks associated with rapid AI advancements and promoting responsible development practices that align with public safety and ethical standards.

 

 

Need Help?

 

If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and
AI Governance News by subscribing to our news letter