UK, US, and Canada Release Trilateral Principles for AI Use in Nuclear Sector

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 09/25/2024
In News

Nuclear regulators from the United Kingdom, United States, and Canada released a landmark document outlining high-level principles for deploying Artificial Intelligence (AI) in the nuclear sector. This trilateral collaboration marks a significant step toward ensuring the safe and secure use of AI technologies in nuclear applications.

The paper, titled “Considerations for Developing Artificial Intelligence Systems in Nuclear Applications,” was co-authored by the UK’s Office for Nuclear Regulation (ONR), the United States Nuclear Regulatory Commission (US NRC), and the Canadian Nuclear Safety Commission (CNSC). This is the first instance of international regulators uniting to establish principles that guide AI deployment in the nuclear sector. The paper offers a comprehensive framework for AI lifecycle management and serves as a guide for developers, nuclear dutyholders, and regulators.

AI has the potential to revolutionize nuclear safety, security, and operations by enhancing efficiency and reducing human error. The paper highlights that AI could be used to decrease worker exposure to dangerous environments, improve predictive maintenance through data analysis, and automate tasks that would otherwise require human intelligence. However, with these advances come significant challenges, particularly in ensuring that AI systems can be trusted to operate safely in critical environments.

The trilateral paper outlines several key principles that nuclear dutyholders and developers must consider when deploying AI systems. One of the paper’s central tenets is the need to manage AI systems based on the consequences of failure. Given the high stakes of nuclear safety, the document emphasizes that AI must be rigorously tested to ensure it does not pose undue risks. For applications with minimal safety impact, AI may be granted more autonomy, while higher-risk systems must retain significant human oversight.

The integration of AI into nuclear systems requires careful attention to how humans interact with and supervise AI systems. The paper emphasizes the importance of balancing human oversight with AI autonomy to prevent over-reliance on machine decision-making, which could lead to complacency or error in critical situations. AI should enhance, not replace, human judgment, ensuring safety remains the top priority.

AI systems must be managed from design through deployment. The paper stresses the need for ongoing evaluation and updates to AI systems as they adapt to new data and operational environments. A well-structured lifecycle management process is critical to maintaining the reliability and safety of AI technologies.

Another crucial principle is the need to harmonize AI with existing nuclear safety and security standards. Since AI-specific regulatory frameworks are still developing, current standards for nuclear safety must be adapted to accommodate the unique attributes of AI. This involves considering existing engineering and safety principles and ensuring AI does not introduce unforeseen risks into nuclear systems.

One of the paper’s significant conclusions is that AI-specific consensus standards for the nuclear industry may not be developed quickly enough to keep up with the fast pace of AI innovation. In the meantime, nuclear operators and regulators must rely on existing nuclear safety standards while incorporating new considerations specific to AI.

Shane Turner, Technical Director at the ONR, highlighted the importance of this collaboration, stating, “This significant collaboration between CNSC, US NRC, and ONR will support the wider international nuclear community to understand what is important when considering the application of AI.”

The trilateral principles outlined in the paper are intended to serve as a starting point for further discussion and development of AI safety standards within the nuclear industry. While hurdles remain, the potential benefits of AI—such as improved safety, efficiency, and operational reliability—make it a valuable tool for the future of nuclear energy.

As the paper notes, “While there are hurdles to consider to successfully deploy AI, there are also potentially significant benefits to using AI. If effectively managed, negative consequences could be avoided or mitigated for many applications.”

Need Help?

If you have questions or concerns about global AI guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and
AI governance news by subscribing to our newsletter.