UPDATE — AUGUST 2025: Since the publication of the joint paper, there have been notable developments. In early 2025, the International Atomic Energy Agency (IAEA) began referencing the UK ONR, US NRC, and Canadian CNSC’s work in its own AI and nuclear safety workshops, calling the trilateral paper a “baseline model” for broader adoption. Other national regulators have since launched their own studies on AI oversight, signaling the beginnings of a possible multinational expansion.
Standardization efforts have also begun. The IEEE and IEC both launched working groups in 2025 to examine "trustworthy AI in safety-critical infrastructure," with nuclear explicitly identified as a pilot sector. These groups are exploring technical standards around AI reliability, explainability, and fail-safe designs.
At the national level, regulators have moved from principles to practical steps. In March 2025, the US Nuclear Regulatory Commission (NRC) issued a request for information to nuclear licensees on how AI could be applied in reactor monitoring, predictive maintenance, and safety analysis. The UK’s ONR began integrating AI oversight into its Technology Qualification Process for new nuclear technologies, requiring structured safety submissions that account for AI. Canada’s CNSC has launched an AI Regulatory Readiness Initiative, including supervised testbeds for licensees to trial AI in real nuclear settings. Across these regulators, emphasis has grown on explainable AI and human-in-the-loop requirements, reinforcing that critical safety decisions must remain under human authority.
What has not yet emerged is a binding international AI-nuclear safety law or a finalized consensus standard, and the IAEA has not yet published its own formal guidance.
ORIGINAL NEWS POST:
UK, US, and Canada Release Trilateral Principles for AI Use in Nuclear Sector
Nuclear regulators from the United Kingdom, United States, and Canada released a landmark document outlining high-level principles for deploying Artificial Intelligence (AI) in the nuclear sector. The publication marks the first international collaboration among nuclear regulators to guide the safe and secure deployment of AI technologies in this high-stakes industry.
A Global Framework for AI in Nuclear Applications
The joint paper, titled “Considerations for Developing Artificial Intelligence Systems in Nuclear Applications,” was co-authored by the UK’s Office for Nuclear Regulation (ONR), the U.S. Nuclear Regulatory Commission (NRC), and the Canadian Nuclear Safety Commission (CNSC). It presents a comprehensive framework for managing AI systems throughout their lifecycle, offering direction for developers, dutyholders, and regulators.
AI holds immense promise for improving nuclear safety and efficiency. It can help reduce worker exposure to hazardous environments, strengthen predictive maintenance through advanced data analytics, and automate time-intensive tasks. However, the paper underscores that these benefits must be balanced against the risks that accompany increased automation and AI decision-making.
Managing Risk and Human Oversight
A key message from the paper is that AI systems must be managed according to the potential consequences of failure. Low-risk applications may allow more AI autonomy, but high-risk systems—especially those impacting safety—must include strong human oversight. The document makes clear that in nuclear settings, safety-critical decisions must always remain under human control. The authors highlight the importance of human-AI collaboration. Overreliance on automated systems could lead to complacency or delayed responses during emergencies. Regulators urge that AI should support, not replace, human expertise. The goal is to ensure AI enhances decision-making without compromising accountability.
Ensuring Reliability and Lifecycle Management
The paper also calls for continuous oversight of AI systems from design through deployment. AI models must undergo ongoing testing, validation, and updates as they evolve and interact with new operational data. Lifecycle management is essential to maintaining reliability and preventing system drift that could introduce safety risks over time. Additionally, the regulators stress that AI applications in the nuclear sector must be harmonized with existing nuclear safety and security standards. While AI-specific frameworks are still emerging, operators are encouraged to adapt current safety principles to account for the unique characteristics of AI systems.
Balancing Innovation and Regulation
The report warns that AI-specific consensus standards may take years to develop, while AI technologies continue to evolve rapidly. Until dedicated standards exist, regulators and industry stakeholders must rely on existing nuclear frameworks, adapting them to incorporate new safety and ethical considerations. “This significant collaboration between CNSC, US NRC, and ONR will support the wider international nuclear community to understand what is important when considering the application of AI,” said Shane Turner, Technical Director at the ONR.
The trilateral principles serve as a foundation for future regulatory guidance and global discussion. They signal a coordinated approach to managing AI in nuclear environments, one that emphasizes trust, explainability, and accountability. The paper concludes that while there are hurdles to overcome in deploying AI successfully, there are also potentially significant benefits. If effectively managed, negative consequences could be avoided or mitigated for many applications.
Need Help?
If you have questions or concerns about global AI guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.