California Lawmakers Propose AI Bill to Combat Algorithmic Bias

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 03/28/2024
In News

California lawmakers are considering a bill that would regulate AI in various industries. Assembly Bill 2930 (AB 2930), introduced on February 15, seeks to regulate the use of automated decision tools powered by AI that make consequential decisions impacting individuals’ lives. The bill acknowledges the potential for these tools to contribute to algorithmic discrimination, defined as unjustified differential treatment or adverse impacts disfavoring people based on protected characteristics such as race, color, ethnicity, sex, religion, age, disability, or national origin.

A key requirement under the bill is for deployers and developers to perform annual impact assessments evaluating various aspects of the tools. These assessments must include a statement of the tool’s purpose and intended benefits, uses, and deployment contexts; a description of its outputs and how they are used to make consequential decisions; a summary of the data inputs processed; an analysis of potential adverse impacts based on protected characteristics; a description of safeguards implemented to address risks of algorithmic discrimination; details on how the tool will be used or monitored by humans; and information on how it has been or will be evaluated for validity and relevance.

AB 2930 mandates that deployers notify individuals subject to consequential decisions made or significantly influenced by automated tools, providing information about the tool’s purpose, the deployer’s contact details, and a plain language description of its operation. Furthermore, if a consequential decision is based solely on an automated tool, the deployer must honor an individual’s request for an alternative selection process or accommodation, if technically feasible.

A core provision prohibits deployers from using automated decision tools in a manner that results in algorithmic discrimination. Both deployers and developers are required to establish governance programs with reasonable administrative and technical safeguards to manage the risks of algorithmic discrimination associated with the use or intended use of these tools. The safeguards should be appropriate to factors such as the tool’s use, the entity’s role, size, complexity, resources, and the technical feasibility and cost of available risk management tools.

Under the bill, deployers and developers must make publicly available a clear policy summarizing the types of automated decision tools they use or make available and how they manage the risks of algorithmic discrimination arising from their use. Developers are also obligated to provide deployers with information on the intended uses of the tools, known limitations, data used for programming or training, and details on how the tools were evaluated for validity and explainability before sale or licensing.

AB 2930 grants enforcement powers to the Attorney General, district attorneys, county counsels, city attorneys, and city prosecutors, who can bring civil actions against deployers or developers for violations. Courts can award injunctive relief, declaratory relief, and reasonable attorney’s fees and litigation costs. Significantly, for violations involving algorithmic discrimination, courts can impose a civil penalty of $25,000 per violation. However, public attorneys must provide 45 days’ written notice before commencing an action for injunctive relief, allowing deployers or developers to cure the violation and avoid the claim by providing an express written statement under penalty of perjury.

While the bill aims to promote fairness and accountability in the use of AI-powered automated decision tools, it includes certain exemptions and limitations. It does not apply to cybersecurity-related technology, and trade secrets contained in impact assessments are exempt from disclosure under the California Public Records Act. Additionally, the impact assessment and governance program requirements do not apply to deployers with fewer than 25 employees unless they deployed an automated decision tool that impacted more than 999 people in the prior year.

If you’re wondering how California’s AI bill, or any other bill around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance while answering your questions and concerns.
