What to Consider when Navigating Global AI Compliance
As governments introduce new rules for artificial intelligence, many companies are rethinking how they manage compliance on a global scale. Laws like the EU AI Act and other regional regulations are beginning to take effect. These laws require companies to implement governance programs, conduct risk assessments, perform technical testing, and follow strict rules for uses like generative AI.
Managing compliance across this patchwork of laws won’t be easy. But starting with the right framework can make a big difference.
Why the EU AI Act Is the Best Starting Point for Global AI Compliance
In that regard, companies should begin focusing on the most stringent regulations in the EU AI Act. The EU AI Act will likely get companies 95% of the way to compliance for most regulations. Striving to do the right things like governance, risk assessment focused on rights and interests beyond the company, responsible technical testing, and monitoring will facilitate compliance, rather than a piecemeal approach trying to satisfy every requirement separately.
All major frameworks will require AI governance committees, people responsible for managing AI risks, and outward-focused risk assessments considering impacts on rights. Companies should assess how AI systems could affect people interacting with them and not just reputational risks. These assessments should inform technical key performance indicators to measure and monitor risks over time, which gets most of the way to compliance.
If companies focus on ensuring their AI is high-quality and unlikely to infringe on rights, think through risks rigorously, document processes, test thoughtfully, and monitor metrics based on assessments, compliance becomes much easier. The company’s culture should be responsible AI governance, accountability, assessments focused on people’s rights, using those to inform testing and monitoring, and documenting it in policies and procedures.
Risk assessment and red teaming has to be use case specific, with general testing having limited value. Risk manifests in the context of a particular implementation and domain, so mitigations must be tailored. Benchmarks and metrics also need to be use case and industry specific. Broad benchmarks have some uses for widely applicable vulnerabilities like toxicity or wrong answers, but for serious risk assessment focused on a particular industry and application, customized benchmarks are needed.
Need Help?
If you’re wondering how to abide by the EU AI Act, or any global regulation, don’t hesitate to reach out to BABL AI. Their Audit Experts are ready to provide valuable assistance while helping you with your questions and concerns.