NIST Launches ARIA Program to Assess Societal Risks and Impacts of AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/29/2024 in News

The National Institute of Standards and Technology (NIST) has announced the launch of a new program, Assessing Risks and Impacts of AI (ARIA), designed to evaluate the societal risks and impacts of artificial intelligence (AI) systems. This initiative aims to understand the effects of AI when used regularly in realistic settings, helping to build a foundation for trustworthy AI systems. The ARIA program is set to develop methodologies to quantify how AI systems function within societal contexts once deployed. This assessment will support the U.S. AI Safety Institute’s testing efforts, contributing to the creation of reliable, safe, and fair AI technologies.

NIST’s new testing, evaluation, validation, and verification (TEVV) program is part of a broader effort to improve understanding of AI’s capabilities and impacts. ARIA aims to help organizations and individuals determine whether AI technologies are valid, reliable, safe, secure, private, and fair when put to real-world use.

“In order to fully understand the impacts AI is having and will have on our society, we need to test how AI functions in realistic scenarios — and that’s exactly what we’re doing with this program,” said U.S. Commerce Secretary Gina Raimondo. “With the ARIA program, and other efforts to support Commerce’s responsibilities under President Biden’s Executive Order on AI, NIST and the U.S. AI Safety Institute are pulling every lever when it comes to mitigating the risks and maximizing the benefits of AI.”

Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director, emphasized the real-world focus of the ARIA program. “The ARIA program is designed to meet real-world needs as the use of AI technology grows. This new effort will support the U.S. AI Safety Institute, expand NIST’s already broad engagement with the research community, and help establish reliable methods for testing and evaluating AI’s functionality in the real world.”

ARIA builds upon the AI Risk Management Framework, which NIST released in January 2023. This framework recommends using both quantitative and qualitative techniques to analyze and monitor AI risk and impacts. ARIA aims to operationalize these recommendations by developing new methodologies and metrics to assess how well AI systems maintain safe functionality within societal contexts.

Reva Schwartz, the ARIA program lead at NIST’s Information Technology Lab, highlighted the comprehensive nature of the assessments. “Measuring impacts is about more than how well a model functions in a laboratory setting,” Schwartz explained. “ARIA will consider AI beyond the model and assess systems in context, including what happens when people interact with AI technology in realistic settings under regular use. This gives a broader, more holistic view of the net effects of these technologies.”

The results of ARIA will inform NIST’s collective efforts to create safe, secure, and trustworthy AI systems, supporting the work of the U.S. AI Safety Institute. This initiative is part of NIST’s ongoing commitment to advancing AI safety and trustworthiness through rigorous testing and evaluation.

NIST’s announcement of the ARIA program follows several significant developments, including the 180-day mark of the Executive Order on trustworthy AI and the U.S. AI Safety Institute’s recent unveiling of its strategic vision and international safety network. These efforts collectively aim to mitigate the risks associated with AI while maximizing its benefits for society.

Need Help? 

If you’re wondering how AI regulations could impact you in this ever-changing landscape, don’t hesitate to reach out to BABL AI. Their team of Audit Experts is ready to offer valuable insight and answer any questions or concerns you may have.

Photo: Entrance of the Gaithersburg, MD campus of the National Institute of Standards and Technology (NIST), a physical sciences laboratory complex under the U.S. Department of Commerce, 01-30-2021. Photo by grandbrothers on depositphotos.com
