NIST Launches ARIA Program to Assess Societal Risks and Impacts of AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/29/2024
In News

UPDATE – FEBRUARY 2026:

Since NIST announced the Assessing Risks and Impacts of AI (ARIA) program, the initiative has moved from planning into early implementation. In January 2026, NIST confirmed that initial ARIA pilot studies had begun. These studies focus on real-world evaluations of AI systems in areas such as healthcare diagnostics, financial decision-making, and public-sector services. The pilots are designed to measure how AI performs when people use it regularly in real environments, rather than only in laboratory settings.

ARIA is now being developed in collaboration with multiple federal agencies, including the Department of Energy and the National Science Foundation, as well as private-sector partners. The goal is to establish shared methodologies for assessing fairness, transparency, robustness, and societal impact across industries.

In February 2026, the U.S. AI Safety Institute announced that ARIA’s socio-technical assessment methods will be integrated into its broader Testing, Evaluation, Validation, and Verification (TEVV) framework. Consequently, future AI safety testing will increasingly evaluate both technical performance and real-world human interaction outcomes.

NIST has indicated that an interim ARIA report is expected in mid-2026. That report is anticipated to outline early findings and proposed metrics. These metrics could inform future federal guidance and contribute to updates of the AI Risk Management Framework.

International coordination has also expanded. ARIA now aligns with global AI safety and assurance efforts, including collaboration with OECD and EU partners. This helps move toward more consistent cross-border approaches to AI evaluation.

Overall, the ARIA program remains focused on building practical, evidence-based methods for understanding AI’s societal impacts. It also works toward strengthening trustworthy AI development through real-world measurement and testing.

ORIGINAL NEWS STORY:

NIST Launches ARIA Program to Assess Societal Risks and Impacts of AI

The National Institute of Standards and Technology (NIST) has announced the launch of a new program, Assessing Risks and Impacts of AI (ARIA), designed to evaluate the societal risks and impacts of artificial intelligence (AI) systems. The initiative aims to understand the effects of AI when it is used regularly in realistic settings, helping to build a foundation for trustworthy AI systems. The ARIA program will develop methodologies to quantify how AI systems function within societal contexts once deployed. This work will support the U.S. AI Safety Institute's testing efforts and contribute to the creation of reliable, safe, and fair AI technologies.


NIST’s new testing, evaluation, validation, and verification (TEVV) program is part of a broader effort to improve understanding of AI’s capabilities and impacts. ARIA aims to help organizations and individuals determine whether AI technologies are valid, reliable, safe, secure, private, and fair when put to real-world use.


“In order to fully understand the impacts AI is having and will have on our society, we need to test how AI functions in realistic scenarios — and that’s exactly what we’re doing with this program,” said U.S. Commerce Secretary Gina Raimondo. “With the ARIA program, and other efforts to support Commerce’s responsibilities under President Biden’s Executive Order on AI, NIST and the U.S. AI Safety Institute are pulling every lever when it comes to mitigating the risks and maximizing the benefits of AI.”


Supporting the AI Safety Institute


Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director, noted that ARIA has a clear real-world focus. “The program is designed to meet real-world needs as AI grows,” she explained. “It will strengthen the U.S. AI Safety Institute, deepen ties with researchers, and create reliable methods for evaluating AI in everyday use.” ARIA builds on the AI Risk Management Framework (AI RMF), first released in January 2023. That framework recommended both quantitative and qualitative tools to measure AI risk. ARIA takes the next step by operationalizing those ideas with new methodologies and metrics.


Measuring Impacts Beyond the Lab


Program lead Reva Schwartz emphasized that ARIA will capture impacts in context. “Measuring impacts is more than testing in a lab,” she said. “ARIA looks at AI systems in daily use and studies what happens when people interact with them.” This broader approach, she added, provides a holistic view of AI’s net effects. By combining technical testing with human context, ARIA can show both strengths and risks of these systems.


Broader AI Safety Efforts


The results from ARIA will guide NIST’s work on safe and trustworthy AI. They will also support the U.S. AI Safety Institute’s testing programs. Together, these efforts advance the goals of President Biden’s Executive Order on trustworthy AI and align with the Institute’s new strategic vision and global safety network.


Need Help?

If you’re wondering how AI regulations could impact you in this ever-changing landscape, don’t hesitate to reach out to BABL AI. Their team of Audit Experts is ready to offer valuable insight while answering any questions or concerns you may have.


Photo of Gaithersburg, MD, USA, 01-30-2021: Entrance of the Gaithersburg campus of the National Institute of Standards and Technology (NIST), a physical sciences lab complex under the U.S. Department of Commerce. Photo by grandbrothers on depositphotos.com
