California Unveils Guidelines to Address AI’s Impact on Marginalized Communities

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 01/02/2025
In News

California has released the “State of California Guidelines for Evaluating Impacts of Generative AI on Vulnerable and Marginalized Communities.” Published in December 2024 and now being implemented in the new year, the guidelines emphasize equity and safety as foundational principles for the responsible deployment of generative AI (GenAI) across state programs.

The guidelines are part of a broader initiative under Governor Gavin Newsom’s Executive Order N-12-23, which calls for responsible adoption of GenAI in public services. They aim to help state agencies anticipate, evaluate, and mitigate potential inequities that could arise from GenAI applications. 

At their core, the guidelines advocate for a systematic approach to designing and deploying AI solutions. By considering the unequal starting points of marginalized communities, the state hopes to foster equitable access to benefits while minimizing risks such as bias or exclusion. The recommendations stress that these inequities, if left unchecked, could perpetuate historical disadvantages.

To that end, the guidelines include a GenAI Pre-Procurement Equity Evaluation Framework. This requires state entities to assess AI tools for potential positive and negative impacts, particularly on groups historically underrepresented in public datasets or overrepresented in areas such as criminal justice or public welfare records.  

The guidelines outline several critical tools, including:  

  1. GenAI Equity Evaluation Checklist: This ensures fairness, accountability, and transparency in AI systems, including human oversight mechanisms to counteract biases.

  2. Human-Machine Bias Reference Table: The table identifies biases such as automation and selection bias, helping designers and users mitigate their effects.

  3. Community Engagement Protocols: Agencies are urged to involve affected communities at every stage, from pre-engagement to active deliberation and feedback synthesis. This includes public surveys, small group discussions, and clear communication of AI’s intended uses and limitations.

The document includes recommendations on engaging trusted community networks, conducting iterative consultations, and integrating feedback into final AI tool designs. Agencies are also encouraged to provide ongoing updates to ensure tools remain aligned with community needs and legal standards.  

California’s guidelines align with existing state laws, including the California Information Practices Act, and reference best practices from federal and international frameworks, such as the NIST AI Risk Management Framework.

Need Help?

If you’re wondering how California’s AI strategy, or any other AI strategies and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.
