What to Consider when Navigating Global AI Compliance

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 02/08/2024
In Blog

Many companies are beginning to look at how to manage compliance with emerging AI regulations around the world. Rules like the EU AI Act will soon take effect, alongside various local laws requiring AI governance, testing, risk assessments, and guidelines for uses such as generative AI. As a result, managing compliance across these diverse regulations is going to be difficult.

In that regard, companies should begin by focusing on the most stringent set of requirements, the EU AI Act. Meeting the EU AI Act will likely get companies 95% of the way to compliance with most other regulations. Striving to do the right things, such as governance, risk assessments focused on rights and interests beyond the company, responsible technical testing, and monitoring, will facilitate compliance far better than a piecemeal approach that tries to satisfy every requirement separately.

All major frameworks will require AI governance committees, people responsible for managing AI risks, and outward-focused risk assessments that consider impacts on rights. Companies should assess how AI systems could affect the people interacting with them, not just the company's reputational risk. These assessments should then inform technical key performance indicators (KPIs) used to measure and monitor risks over time, which gets most of the way to compliance.
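
As a rough illustration, here is a minimal sketch of what such a KPI might look like in practice. The metric name, threshold, and values are entirely hypothetical; the point is simply that a finding from a rights-focused risk assessment can be turned into a number that is tracked against an agreed limit and escalated when it drifts.

```python
# Minimal sketch (hypothetical metric, threshold, and values): turning a finding
# from a rights-focused risk assessment into a KPI that is monitored over time.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class RiskKPI:
    name: str                     # e.g. "false-denial rate for loan applicants"
    threshold: float              # acceptable upper bound agreed in the assessment
    history: list = field(default_factory=list)  # periodic production measurements

    def record(self, value: float) -> None:
        self.history.append(value)

    def breached(self, window: int = 4) -> bool:
        # Flag the KPI when the recent average drifts above the agreed threshold.
        recent = self.history[-window:]
        return bool(recent) and mean(recent) > self.threshold


# Illustrative use: periodic measurements of a hypothetical fairness metric.
kpi = RiskKPI(name="false_denial_rate", threshold=0.05)
for measurement in [0.03, 0.05, 0.06, 0.08]:
    kpi.record(measurement)

if kpi.breached():
    print(f"Escalate to the AI governance committee: {kpi.name} exceeds threshold")
```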

If companies focus on ensuring their AI is high-quality and unlikely to infringe on rights, think through risks rigorously, document their processes, test thoughtfully, and monitor metrics based on those assessments, compliance becomes much easier. The company's culture should be one of responsible AI governance and accountability: assessments focused on people's rights, testing and monitoring informed by those assessments, and all of it documented in policies and procedures.

Risk assessment and red teaming have to be use-case specific; generic testing has limited value. Risk manifests in the context of a particular implementation and domain, so mitigations must be tailored accordingly. Benchmarks and metrics likewise need to be use-case and industry specific. Broad benchmarks have some use for widely applicable vulnerabilities like toxicity or incorrect answers, but serious risk assessment focused on a particular industry and application calls for customized benchmarks.
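
To make the distinction concrete, here is a minimal sketch of what a use-case-specific benchmark could look like, assuming a hypothetical insurance-claims assistant. The test cases, the stand-in model, and the exact-match metric are all illustrative rather than a prescribed method; the point is that the prompts and scoring come from the actual domain instead of a generic test set.

```python
# Minimal sketch (all names and cases hypothetical): a use-case-specific benchmark
# is a curated set of prompts and acceptable answers from the actual domain,
# scored with a metric that matters for that application.
def exact_match_rate(model_fn, cases):
    """Score a model against domain-specific test cases.

    model_fn: callable mapping a prompt string to a response string
              (assumed to wrap whatever system is being assessed).
    cases:    list of (prompt, acceptable_answers) pairs written by domain experts.
    """
    if not cases:
        return 0.0
    hits = 0
    for prompt, acceptable in cases:
        if model_fn(prompt).strip().lower() in {a.lower() for a in acceptable}:
            hits += 1
    return hits / len(cases)


# Illustrative cases for a hypothetical insurance-claims assistant, rather than
# a generic toxicity or trivia benchmark.
claims_cases = [
    ("Is flood damage covered under a standard homeowners policy?", {"no"}),
    ("Can a claimant appeal a denied claim?", {"yes"}),
]

# Stand-in "model" for demonstration purposes only.
score = exact_match_rate(lambda p: "No" if "flood" in p else "Yes", claims_cases)
print(f"Domain-specific benchmark score: {score:.2f}")
```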

If you’re wondering how to comply with the EU AI Act, or any other global regulation, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to help you work through your questions and concerns.
