California Judicial Council Approves Guardrails for Generative AI Use in Court System

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 07/28/2025
In News

In a significant move toward regulating artificial intelligence in the legal system, the California Judicial Council approved a new rule of court and a standard of judicial administration that establish the state’s first formal guardrails on the use of generative AI in court-related work. The decision comes amid growing use of tools like ChatGPT across sectors and mounting concerns over transparency, bias, and the integrity of legal proceedings.

Effective September 1, 2025, the new policy framework requires all California courts that permit the use of generative AI by judicial officers or court staff to adopt a local policy addressing key risks. Courts must implement their policies by December 15, 2025.

The reforms stem from recommendations by the Judicial Council’s Artificial Intelligence Task Force, formed by Chief Justice Patricia Guerrero in May 2024 to evaluate the promises and pitfalls of emerging AI technologies in the judiciary.

“Our goal is to promote responsible innovation without compromising confidentiality, fairness, or public trust,” said Hon. Brad R. Hill, who chairs the task force. “Generative AI can be a helpful tool in court operations, but it must be used thoughtfully and ethically.”

Rule 10.430: A Framework for Court Staff and Non-Adjudicative Work

Under the newly adopted California Rules of Court, Rule 10.430, courts that do not ban generative AI must implement policies governing its use by court staff and judicial officers in tasks outside of decision-making. These policies must prohibit entry of confidential or nonpublic information into public AI systems, require accuracy verification and correction of hallucinated outputs, and mandate disclosure when any publicly distributed material consists entirely of generative AI outputs.

The rule also bars AI use that unlawfully discriminates or disparately impacts protected groups and requires alignment with all applicable laws and ethics codes.

Importantly, courts are not mandated to adopt a specific model policy but may tailor local rules, provided they address these core areas. A Judicial Council-provided model policy is available as a reference.

Standard 10.80: Guidance for Judicial Officers in Their Adjudicative Role

For judges and justices using generative AI in their decision-making processes, the council adopted California Standards of Judicial Administration, Standard 10.80. While not mandatory, the standard outlines best practices for AI use in adjudicative work. It strongly discourages entering confidential information into public AI tools and urges judges to verify accuracy, guard against bias, and consider disclosing the use of generative AI in materials provided to the public.

The task force stopped short of banning generative AI in judicial decision-making, arguing that ethical boundaries are already covered by existing judicial canons. Critics, however, raised concerns that without firm restrictions, judges could rely on AI tools inappropriately, potentially undermining the credibility of court decisions.

Mixed Reactions from Legal Community

The proposal was circulated for public comment earlier this year and received 19 submissions from judges, bar associations, academics, and advocacy groups. Opinions varied sharply. Some called for a complete ban on AI in adjudication, while others urged the Council to require stricter disclosures or uniform statewide policies.

Hon. Lamar Baker, Associate Justice of the California Court of Appeal, Second Appellate District, opposed the standard for judges, warning it could “undermine public confidence in the judiciary,” and advocated for a moratorium on AI in decision-making until more ethical guidance is available.

Meanwhile, groups like the California Employment Lawyers Association supported the framework but pushed for uniform statewide standards to prevent a patchwork of policies across counties.

Looking Ahead: Education, Oversight, and Flexibility

The Judicial Council emphasized that the rules are intended to be a first step. Courts retain the option to fully prohibit AI use, and the model policy is expected to evolve alongside technology. The AI Task Force is also developing education programs, FAQs, and additional guidance materials in collaboration with the Center for Judicial Education and Research.

With generative AI tools already embedded in legal research products and administrative platforms, California’s courts are now tasked with navigating the tension between innovation and judicial integrity.

“The law must keep pace with technology, but not at the expense of due process,” Hill said. “This policy gives courts a principled way forward.”

Need Help?

If you’re concerned or have questions about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you stay informed and compliant.
