AI Documentation and Transparency: Partnership on AI Calls for Global Coordination

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 10/28/2024
In News

A new report from the Partnership on AI (PAI) has raised concerns about the growing patchwork of frameworks addressing AI documentation, which could lead to fragmentation if not properly coordinated. The report, titled “Policy Alignment on AI Transparency: Analyzing Interoperability of Documentation Requirements Across Eight Frameworks,” focuses on interoperability challenges between documentation requirements in the U.S., EU, UK, and several multilateral initiatives. As AI systems increasingly power critical sectors, ensuring consistent and interoperable documentation practices is vital to safeguard transparency, accountability, and public trust in these systems.


Documentation for AI systems plays a critical role in transparency, allowing stakeholders such as regulators, developers, and the public to assess the safety, fairness, and reliability of AI systems. These documentation artifacts, which can include model cards, technical specifications, and incident reports, provide essential information about how AI models are developed, trained, and deployed.
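
To make the idea of a documentation artifact concrete, here is a minimal, hypothetical sketch of a model card represented as machine-readable data. The field names below are illustrative assumptions, not requirements taken from the PAI report or from any specific regulatory framework.

```python
# Illustrative sketch of a model card as a machine-readable artifact.
# All field names are hypothetical examples, not requirements from the
# PAI report or any particular framework.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data_summary: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    safety_measures: list = field(default_factory=list)

card = ModelCard(
    model_name="example-text-classifier",
    version="1.0.0",
    intended_use="Demonstration only; not a real deployment.",
    training_data_summary="Describe data sources, collection period, and known gaps.",
    evaluation_metrics={"accuracy": 0.92, "false_positive_rate": 0.04},
    known_limitations=["Not evaluated on non-English text"],
    safety_measures=["Human review required for high-stakes decisions"],
)

# Serializing to a structured format makes the same artifact easy to
# share with regulators, developers, and auditors alike.
print(json.dumps(asdict(card), indent=2))
```

Capturing documentation in a shared, structured form like this is one way a single artifact could serve reviewers in more than one jurisdiction.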


“Consistent documentation practices help ensure AI models are evaluated properly for risks, and it supports regulatory oversight and accountability,” says the report. Without standardized documentation, comparing and regulating AI models across different jurisdictions becomes increasingly difficult. This can lead to discrepancies in how AI risks are managed, potentially leaving certain regions more vulnerable to misuse or failure of AI systems.


The PAI report analyzed eight key frameworks, including the EU AI Act, U.S. Executive Order on AI, and the UK AI White Paper, alongside multilateral agreements like the Hiroshima AI Process and OECD AI Principles. The analysis found that while many of these frameworks emphasize the importance of documentation, there is little consensus on the specific forms and contents of these documentation requirements.


Some frameworks, such as the EU AI Act, have already introduced binding documentation guidelines, particularly for general-purpose AI models and high-risk AI systems. The EU’s approach includes detailed provisions for documenting model capabilities, training data, and safety measures. By contrast, other frameworks, such as the U.S. NIST AI Risk Management Framework (RMF), offer high-level recommendations but leave many specifics to industry discretion.


The lack of alignment in documentation standards across these frameworks creates challenges for companies operating globally. “It is critical to avoid a scenario where different regions require entirely different sets of documentation for the same AI system,” the report states. Such discrepancies can not only increase the compliance burden on developers but also undermine global efforts to ensure AI systems are safe and trustworthy.


The Partnership on AI advocates for greater collaboration between governments, industry, and civil society to develop interoperable documentation requirements. The report highlights several areas where more coordinated efforts could improve alignment, including:


  1. Standardizing Documentation Artifacts: The report calls for global standards around documentation artifacts, such as model cards and technical documentation. This would help create a consistent framework for evaluating AI systems across different jurisdictions.

  2. Harmonizing Policies for Foundation Models: Foundation models—those AI systems that are trained on vast datasets and can be adapted for many uses—are a particular focus of the report. Ensuring consistent documentation for these models is critical, given their potential to power a wide range of applications, from healthcare to finance.

  3. Building Capacity in the Global South: The report stresses the importance of including perspectives from the Global South in these standardization efforts. Many AI frameworks, the report notes, have been developed by countries in the Global North, which may not fully address the needs and challenges faced by countries with less developed AI infrastructure.

  4. Leveraging Multilateral Agreements: Several international initiatives, such as the G7 Hiroshima AI Process and OECD AI Principles, emphasize the importance of international collaboration in AI governance. The report suggests that these initiatives could be expanded to focus more specifically on harmonizing documentation requirements.


Need Help?


Keeping track of the ever-changing AI landscape can be tough, especially if you have questions and concerns about how it will impact you. Don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance.
