UPDATE — AUGUST 2025: Since the Japan AI Safety Institute (J-AISI) first published its “Guide to Evaluation Perspectives on AI Safety” in 2024, the framework has continued to evolve. The guide was updated to version 1.10 on March 28, 2025, refining its evaluation criteria for safety and ethical oversight of AI systems, including generative AI and large language models. This update reflects Japan’s commitment to keeping AI governance responsive to emerging risks, while maintaining its central principles of fairness, transparency, privacy, and security. J-AISI has also deepened its international role, working with counterparts in the U.K. and U.S. to align safety evaluation methods and testing practices.
ORIGINAL NEWS STORY:
Japan AI Safety Institute Releases Comprehensive AI Safety Evaluation Guide
The Japan AI Safety Institute published its “Guide to Evaluation Perspectives on AI Safety,” providing a thorough framework for evaluating the safety and ethics of artificial intelligence (AI) systems. As AI becomes increasingly integrated into various industries and everyday applications, the guide aims to address mounting concerns about the technology’s misuse and potential risks, particularly in generative AI and large language models (LLMs).
A Foundation for Responsible AI
The guide builds on Japan’s leadership in global AI governance, following initiatives such as the Hiroshima AI Process, which helped shape international standards for AI safety. It provides organizations with a structured approach to ensure AI systems are designed and deployed with fairness, transparency, privacy, and security in mind. As the report notes, rapid advances in AI have raised complex questions about reliability, accountability, and potential misuse. The framework encourages developers to align with Japan’s domestic policies while meeting the expectations of the broader global AI community.
Human-Centered Design and Ethical Oversight
At its core, the guide promotes a human-centered approach to AI. This principle ensures that AI systems serve people’s interests, respect human rights, and improve well-being without causing harm. It expands the definition of safety to include not just physical risks but also psychological and societal impacts. AI must not endanger individuals’ lives, property, or dignity. Fairness is another critical theme. J-AISI emphasizes that bias and discrimination in AI outputs can undermine public trust. Developers are urged to identify and mitigate these biases early in both the training and deployment stages. While perfect neutrality may be impossible, continuous evaluation and transparency help minimize unfair outcomes.
Protecting Privacy and Strengthening Security
The guide highlights data privacy as a central obligation for developers and organizations deploying AI. As AI systems increasingly process sensitive personal information, strong safeguards are needed to prevent unauthorized access or misuse. J-AISI advises developers to comply with national privacy laws and to embed privacy protection throughout the AI lifecycle. Security receives equal attention. The guide warns that AI systems are vulnerable to cyberattacks and manipulation, which could compromise their integrity or lead to dangerous outcomes. Continuous monitoring and threat detection are recommended to prevent breaches before they escalate.
Transparency and Societal Impact
Transparency plays a vital role in building trust. AI systems, particularly those powered by LLMs, can produce opaque or unpredictable outputs. J-AISI’s guide urges developers to make AI decision-making processes clear and understandable. By explaining how systems function and how outputs are generated, developers can reduce misunderstandings and strengthen accountability. The guide also encourages organizations to examine AI’s broader societal impact, especially in high-risk sectors such as healthcare, finance, and law enforcement. Developers are advised to evaluate potential unintended uses or outcomes and to establish safeguards against misuse.
A Global Model for AI Safety
J-AISI’s publication marks a proactive effort to manage the complex risks of emerging AI technologies. The guide recognizes that while AI promises innovation and progress, it also brings new challenges that demand careful governance. Its comprehensive approach—covering fairness, privacy, security, and transparency—provides a valuable model for responsible AI deployment in Japan and beyond.
Need Help?
Keeping track of the growing AI regulatory landscape can be difficult. If you have any questions or concerns, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you're informed and compliant.