Navigating AI Risk Management: Insights from the NIST AI Framework

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 03/04/2024

The National Institute of Standards and Technology (NIST) released the first version of its Artificial Intelligence Risk Management Framework (NIST AI Framework) on January 26, 2023. More than a year later, the framework continues to help organizations address the unique challenges of AI risk, focusing on trustworthiness and responsible AI practices to mitigate negative impacts and strengthen public trust.


The NIST AI Framework aims to help organizations manage the risks associated with AI systems and to promote trustworthy, responsible development and use. It offers flexible, rights-preserving approaches for the organizations and individuals involved in the AI system lifecycle. The framework is voluntary, non-sector-specific, and intended to be updated continually as technology, standards, and community feedback evolve. It is divided into two parts: the first framing AI risks and the framework's intended audience, and the second detailing the core functions organizations can use to manage those risks.


Measuring AI risk is hard. Metrics for negative impacts can be institutionally biased or oversimplified, and different stages of the AI lifecycle and different AI actors yield different risk perspectives. Risks observed in real-world settings may differ from those seen in controlled environments, and inscrutable AI systems complicate measurement further. Risk tolerance and prioritization are therefore essential, with AI risks integrated into and managed alongside broader enterprise risks.


Organizations face different challenges in managing AI risks depending on their capabilities and resources, and on the diverse perspectives required across the AI lifecycle. The NIST AI Framework outlines five key socio-technical dimensions crucial for AI policy, governance, and risk management. Trustworthy AI systems must embody characteristics such as reliability, safety, transparency, and fairness, which require careful trade-offs and context-specific decisions by diverse teams throughout the AI lifecycle.


When deploying AI systems, AI actors should assess the trustworthiness characteristics: validity, reliability, accuracy, robustness, safety, security, resilience, accountability, transparency, and explainability. Safety considerations in the design phase should aim to prevent dangerous failures, while secure and resilient systems must protect against unauthorized access, withstand adverse events, and respond to attacks. Transparency, in turn, strengthens accountability and confidence in AI systems.


Explainability and interpretability play a crucial role in understanding how an AI system operates and why it produces a given output, which supports trustworthiness. Privacy considerations are essential for safeguarding human autonomy and identity, requiring norms and practices such as anonymity and data control. Fairness involves managing harmful bias and discrimination, where bias spans systemic, computational, and human-cognitive forms. The framework also stresses evaluating the effectiveness of AI risk management itself as a path to continuous improvement. At its center, the AI RMF Core organizes this work into four functions, GOVERN, MAP, MEASURE, and MANAGE, emphasizing the need for diverse perspectives throughout responsible AI system development.
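
The framework itself is process guidance, not software, but a small sketch can make the four core functions concrete. The structure below is purely hypothetical (the AI RMF prescribes no code or schema); it shows one way an organization might record risks in a lightweight register tagged to the GOVERN, MAP, MEASURE, and MANAGE functions. All names, fields, and the severity scale are our assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: the AI RMF does not prescribe code or
# data structures. This sketch tracks risks against the four core functions.
CORE_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

@dataclass
class RiskEntry:
    description: str          # e.g., "training data under-represents group X"
    lifecycle_stage: str      # e.g., "data collection", "deployment"
    function: str             # which AI RMF core function the activity falls under
    owner: str                # accountable AI actor
    severity: int             # 1 (low) to 5 (high); an org-defined scale
    notes: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.function not in CORE_FUNCTIONS:
            raise ValueError(f"function must be one of {CORE_FUNCTIONS}")

register = [
    RiskEntry("Model outputs may encode systemic bias", "design", "MAP",
              owner="impact-assessment team", severity=4),
    RiskEntry("No fairness metric selected for evaluation", "testing", "MEASURE",
              owner="ML engineering", severity=3),
]

# Surface the highest-severity risks first (a MANAGE-style prioritization).
for entry in sorted(register, key=lambda e: e.severity, reverse=True):
    print(f"[{entry.function}] severity={entry.severity}: {entry.description}")
```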


Organizations can use the NIST AI Framework to pursue these outcomes through suggested actions aligned with their own needs and interests, tailoring the guidance, contributing suggestions back, and applying the functions iteratively across the AI lifecycle. The GOVERN function cultivates a risk management culture, aligns AI activities with organizational principles, and addresses lifecycle processes and legal considerations. Accountability, diversity, workforce inclusivity, and engagement with relevant AI actors are key to effective AI risk management within organizations.


Organizations deploying AI systems struggle to anticipate impacts because of the interdependencies among system components. The MAP function addresses this by establishing context: categorizing the system, mapping its benefits and risks, and drawing on interdisciplinary collaboration so decisions throughout the AI lifecycle are well informed. Its subcategories help establish clear objectives, examine risks, consider human oversight, and lay the groundwork for measuring AI performance and trustworthiness. Through proactive risk management and engagement with diverse perspectives, organizations can strengthen the integrity and reliability of their AI systems.
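
As a purely illustrative companion to the MAP function (the framework defines outcomes in prose, not code), the sketch below documents a system's context and categorization so later measurement has something concrete to reference. Every field name and the example system are assumptions, not NIST terminology.

```python
# Illustrative MAP-style context record; the AI RMF defines outcomes
# in prose, not schemas, so every field here is an assumption.
system_context = {
    "name": "loan pre-screening model",
    "intended_purpose": "rank applications for human review",
    "deployment_setting": "consumer lending, web channel",
    "human_oversight": "loan officer reviews every flagged case",
    "benefits": ["faster triage", "more consistent first pass"],
    "mapped_risks": [
        {"risk": "historical lending data encodes bias", "stage": "data"},
        {"risk": "drift between training and live populations", "stage": "deployment"},
    ],
}

# A MAP review might simply verify that nothing essential is missing
# before the system moves on to MEASURE activities.
required = ["intended_purpose", "human_oversight", "mapped_risks"]
missing = [key for key in required if not system_context.get(key)]
print("MAP review:", "complete" if not missing else f"missing {missing}")
```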


Testing for and mitigating bias in AI systems through sound measurement and documentation is vital for trustworthiness. The NIST AI Framework details these practices under the MEASURE and MANAGE functions: MEASURE covers selecting appropriate metrics, involving experts, and evaluating system characteristics, while MANAGE prioritizes and responds to the risks that measurement identifies. Applying both functions continuously is essential as risks and stakeholder expectations evolve.
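
As one concrete example of a MEASURE-style activity (not a metric the framework mandates), the sketch below computes a simple demographic parity difference: the gap in favorable-outcome rates between groups. The data, group labels, and the review threshold are all illustrative assumptions an organization would set for itself.

```python
# Illustrative only: the AI RMF does not mandate specific metrics.
# Demographic parity difference is one common, simple bias measure:
# the gap between groups' rates of receiving the favorable outcome.

def demographic_parity_difference(predictions, groups, favorable=1):
    """Return the max gap in favorable-outcome rates across groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + (pred == favorable), total + 1)
    per_group = {g: hits / total for g, (hits, total) in rates.items()}
    return max(per_group.values()) - min(per_group.values())

# Hypothetical model outputs and group labels.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.80 - 0.40 = 0.40 here

# A MANAGE step might compare this against an internally chosen
# tolerance and trigger review if it is exceeded.
if gap > 0.2:  # threshold is an organizational choice, not from NIST
    print("Gap exceeds tolerance; flag for review and mitigation.")
```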


The NIST AI Framework outlines guidance for managing AI risks and benefits, including documentation, monitoring, response planning, and communication. It emphasizes assigning responsibilities, sustaining the system's value, and addressing issues so AI systems stay aligned with their intended purpose. It also describes AI RMF profiles: tailorings of the framework's outcomes to a specific setting, sector, or application that make risk management practices more effective.
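
Profiles are documents rather than code, but a small sketch can show the idea. The dictionary below is a hypothetical, heavily abridged profile for a resume-screening use case, selecting a subset of core outcomes and recording the organization's own tolerances. None of the identifiers or numbers come from NIST.

```python
# Hypothetical, heavily abridged sketch of an AI RMF "profile":
# a tailoring of the core functions to one use case. The AI RMF
# defines profiles as documents; this structure is illustrative only.
hiring_screen_profile = {
    "use_case": "resume screening assistant",
    "context": {
        "affected_parties": ["job applicants", "recruiters"],
        "legal_considerations": ["anti-discrimination law", "data protection"],
    },
    "selected_outcomes": {
        "GOVERN": ["accountability roles assigned", "incident response policy in place"],
        "MAP": ["intended purpose documented", "impacted groups identified"],
        "MEASURE": ["bias metrics evaluated per release", "accuracy tracked in production"],
        "MANAGE": ["risk responses prioritized by severity", "deployment rollback plan"],
    },
    # Organization-defined tolerances; NIST does not set numeric thresholds.
    "tolerances": {"demographic_parity_difference": 0.1},
}

for function, outcomes in hiring_screen_profile["selected_outcomes"].items():
    print(function, "->", ", ".join(outcomes))
```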


Key AI actors across the lifecycle include end users, human factors professionals, domain experts, AI impact assessors, and governance entities; third-party entities contribute to design and development, while the general public experiences AI impacts directly. Risks specific to AI systems include data representation issues, harmful bias, system complexity, privacy concerns, and reproducibility challenges. Managing privacy and cybersecurity risks remains crucial and can draw on established guidance such as the NIST Privacy Framework and Cybersecurity Framework, but existing frameworks may not fully address AI-specific risks like harmful bias, generative AI concerns, and new security vulnerabilities.


If you’re wondering how the NIST AI Framework, and other AI regulations around the world, could impact you, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance while answering your questions and concerns.
