California Lawmakers Propose AI Bill to Combat Algorithmic Bias

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 03/28/2024
In News

UPDATE – FEBRUARY 2026:

Assembly Bill 2930 (AB 2930) did not become law in 2024. Although it passed the California Assembly and was amended in the Senate, the bill ultimately stalled and was withdrawn before the end of the legislative session. The article below reflects the final amended version from August 2024, which narrowed the bill’s scope to focus primarily on employment-related automated decision tools.


ORIGINAL NEWS STORY:

California Lawmakers Propose AI Bill to Combat Algorithmic Bias

California legislators are advancing a proposal aimed at regulating the use of artificial intelligence in consequential decision-making. Assembly Bill 2930 (AB 2930) seeks to address the risks of algorithmic discrimination across employment and other high-impact sectors.

Introduced on February 15, 2024, the bill targets automated decision tools—AI systems that influence decisions affecting individuals’ lives, such as hiring, housing, credit, and education. The legislation defines algorithmic discrimination as unjustified differential treatment or adverse impacts based on protected characteristics such as race, gender, age, or disability.

The proposal emphasizes the need for transparency, fairness, and accountability in how automated decision tools are developed and used.

Annual Impact Assessments

The bill requires developers and deployers of automated decision tools to complete annual impact assessments. These reports must explain:

  • Why the tool exists and how it is used
  • What data the system processes
  • How its decisions affect individuals
  • What risks it poses based on protected characteristics
  • What safeguards are in place to reduce harm
  • How humans oversee or interact with the system
  • How the system has been tested for accuracy and fairness

Deployers must also notify individuals when an automated tool plays a role in a significant decision. This notice must include the tool’s purpose, a plain-language explanation of how it works, and contact information for additional inquiries.

If a decision is made fully or partially by an automated system, individuals must have access to an alternative process when feasible.

Governance Programs and Public Policies

Organizations that develop or deploy automated decision tools must establish AI governance programs. These programs must include safeguards tailored to the specific risks posed by each system and the size and resources of the organization.

The bill also requires greater public transparency. Both developers and deployers must publish policies describing the automated tools they use and the measures taken to mitigate discrimination risks.

Developers must also provide clients with clear information about each system’s intended purpose, data sources, limitations, and testing procedures.

Penalties and Enforcement

State and local prosecutors would be authorized to bring civil enforcement actions against organizations that violate the law.

Penalties could include fines of up to $25,000 per violation for algorithmic discrimination, along with additional legal remedies.

Before filing a lawsuit, prosecutors would be required to provide 45 days’ notice. This window allows organizations to correct violations and certify that they have resolved the issue.

Exemptions and Small Business Rules

The bill excludes cybersecurity tools from its scope. It also protects trade secrets included in required assessments from disclosure under the California Public Records Act.

Small businesses would largely be exempt from the bill’s audit and governance requirements. Organizations with fewer than 25 employees would not be subject to those provisions unless their automated tools affected 1,000 or more individuals during the previous year.

Need Help?

If you’re concerned about how this bill, or any other AI law, might affect your business, reach out to BABL AI. Their Audit Experts can answer your questions, assess risk, and guide your organization toward responsible AI practices.
