NIST Launches ARIA Program to Assess Societal Risks and Impacts of AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/29/2024
In News

The National Institute of Standards and Technology (NIST) has announced the launch of a new program, Assessing Risks and Impacts of AI (ARIA), designed to evaluate the societal risks and impacts of artificial intelligence (AI) systems. The initiative aims to understand how AI behaves when used regularly in realistic settings, helping to build a foundation for trustworthy AI. ARIA will develop methodologies to quantify how AI systems function in societal contexts once deployed, and its assessments will support the U.S. AI Safety Institute’s testing efforts, contributing to the creation of reliable, safe, and fair AI technologies.

NIST’s new testing, evaluation, validation, and verification (TEVV) program is part of a broader effort to improve understanding of AI’s capabilities and impacts. ARIA aims to help organizations and individuals determine whether AI technologies are valid, reliable, safe, secure, private, and fair when put to real-world use.

“In order to fully understand the impacts AI is having and will have on our society, we need to test how AI functions in realistic scenarios — and that’s exactly what we’re doing with this program,” said U.S. Commerce Secretary Gina Raimondo. “With the ARIA program, and other efforts to support Commerce’s responsibilities under President Biden’s Executive Order on AI, NIST and the U.S. AI Safety Institute are pulling every lever when it comes to mitigating the risks and maximizing the benefits of AI.”

Supporting the AI Safety Institute

Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director, noted that ARIA has a clear real-world focus. “The program is designed to meet real-world needs as AI grows,” she explained. “It will strengthen the U.S. AI Safety Institute, deepen ties with researchers, and create reliable methods for evaluating AI in everyday use.” ARIA builds on the AI Risk Management Framework (AI RMF), first released in January 2023. That framework recommended both quantitative and qualitative tools to measure AI risk. ARIA takes the next step by operationalizing those ideas with new methodologies and metrics.

Measuring Impacts Beyond the Lab

Program lead Reva Schwartz emphasized that ARIA will capture impacts in context. “Measuring impacts is more than testing in a lab,” she said. “ARIA looks at AI systems in daily use and studies what happens when people interact with them.” This broader approach, she added, provides a holistic view of AI’s net effects. By combining technical testing with human context, ARIA can show both strengths and risks of these systems.

Broader AI Safety Efforts

The results from ARIA will guide NIST’s work on safe and trustworthy AI. They will also support the U.S. AI Safety Institute’s testing programs. Together, these efforts advance the goals of President Biden’s Executive Order on trustworthy AI and align with the Institute’s new strategic vision and global safety network.

Need Help? 

If you’re wondering how AI regulations could impact you in this ever-changing landscape, don’t hesitate to reach out to BABL AI. Their team of Audit Experts is ready to offer valuable insights and answer any questions or concerns you may have.

Photo of Gaithersburg, MD, USA, 01-30-2021: Entrance to the Gaithersburg campus of the National Institute of Standards and Technology (NIST), a physical sciences laboratory complex under the U.S. Department of Commerce. Photo by grandbrothers on depositphotos.com.
