Singapore Supreme Court Issues Guidelines for the Use of Generative AI in Court Proceedings

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/26/2024
In News

The Supreme Court of Singapore issued “Registrar’s Circular No. 1 of 2024,” which introduces a groundbreaking guide on the use of Generative Artificial Intelligence (AI) tools by court users. The “Guide on the Use of Generative Artificial Intelligence Tools by Court Users” (the “Guide”) establishes a set of principles and expectations for the application of AI technologies in the preparation and submission of court documents. This development reflects the court’s recognition of the growing use of AI in legal practice and aims to balance innovation with responsibility and accuracy in court submissions.


Effective from October 1, 2024, the Guide applies to all matters within the Singapore Supreme Court, the State Courts—including the Small Claims Tribunals, the Employment Claims Tribunals, and the Community Disputes Resolution Tribunals—as well as the Family Justice Courts. It provides critical guidance to lawyers, self-represented litigants, and other court users, addressing how generative AI tools may be used without compromising the integrity of legal processes.


The Guide outlines the Singapore Supreme Court’s neutral stance on the use of generative AI tools. Rather than banning the technology, the Court allows its use under specific conditions, provided that court users take full responsibility for the accuracy and appropriateness of any AI-generated content submitted in court documents. The circular defines generative AI as software that generates content, including text, images, and audio, based on user prompts. However, the Court emphasizes that this technology is not to be confused with simple tools that only correct grammar or spelling.


The Guide also acknowledges the growing prevalence of AI tools, such as chatbots, in producing coherent and seemingly human-generated responses. While these tools can be powerful aids in document drafting, they come with risks, including the generation of inaccurate or fabricated information. This makes it essential for all court users, including litigants and legal professionals, to verify and take responsibility for the content produced by such tools.


One of the central themes of the Guide is the importance of accuracy and verification. Lawyers and self-represented individuals using generative AI must ensure that all information submitted to the court is factually accurate and independently verified. The Guide stresses that AI-generated content cannot be used to create, fabricate, or manipulate evidence. For instance, while a generative AI tool can assist in drafting a preliminary version of an affidavit or statement, it cannot be used to falsify or embellish evidence.


The Guide further requires court users to ensure that any AI-generated content used in court documents is relevant and does not infringe on intellectual property rights. This includes ensuring that proper source attribution is provided for any referenced material, and that any personal or sensitive information shared with generative AI tools complies with laws governing data privacy, confidentiality, and legal privilege.


Although the Court does not mandate pre-emptive declarations about the use of AI in court documents, users should be prepared to disclose and explain the portions of their documents that were generated using AI, if requested. This disclosure may become necessary if the Court raises concerns about the accuracy or authenticity of submissions.


The Guide highlights the risks of relying too heavily on AI tools, especially given the tendency of some AI systems to generate inaccurate or fabricated information, a phenomenon commonly referred to as “hallucinating.” The Guide emphasizes that generative AI cannot discern whether its output is factually correct or legally sound, which means that court users must thoroughly fact-check any AI-generated material. Moreover, the Court underscores that AI tools are not substitutes for legal research, as they may not incorporate recent developments in case law or legislation.


To safeguard the integrity of court documents, the Guide advises users to cross-check all references to case law, statutes, and other legal materials generated by AI against reliable sources, such as the Singapore Statutes Online or the eLitigation GD Viewer. Relying solely on AI tools to verify legal content is not considered sufficient.


The Guide also emphasizes the importance of respecting intellectual property rights and safeguarding confidential information when using generative AI tools. For example, any material obtained through a court order must not be disclosed or used for purposes outside the legal proceedings for which it was granted.


Need Help?


If you’re wondering how Singapore’s AI guidelines, or any other AI strategies and laws worldwide, could impact you and your business, don’t hesitate to reach out to BABL AI. Their Audit Experts can address your concerns and questions while offering valuable insights.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing and
AI Governance News by subscribing to our newsletter