What is the EU AI Act?
The European Union is once again leading the way in digital regulation with its latest piece of legislation, the Harmonised Rules on Artificial Intelligence, better known as the EU AI Act. The EU has long been on the cutting edge of digital rights and regulation, whether through the General Data Protection Regulation (GDPR), adopted in April 2016 to protect information privacy and related rights, or the Digital Services Act, passed in July 2022 to govern online information and social media content. Now, the EU is setting standards for managing AI systems, looking to minimize potential risks and harms while safeguarding the safety and rights of everyone affected by them.
There have been several incidents of illegal, unethical and biased uses of AI. Companies, journalism outlets, academics, nonprofits and governmental bodies around the globe have documented bias in AI systems over the years. In 2018, for example, Microsoft acknowledged that the use of AI in its offerings could result in reputational harm or liability. In 2019, Denmark found that its tax fraud detection AI was incorrectly flagging low-income and immigrant groups more often than native Danes. Even AI-powered tools deployed during the COVID-19 pandemic to help save lives raised red flags about privacy and accuracy. AI adoption has only accelerated since these incidents, and the problems have grown along with it.
The EU AI Act was proposed in April 2021, with the Council adopting its position in December 2022. Over the past year, the text has gone through several amendments and revisions, with the latest version approved in June 2023. A final version of the EU AI Act is expected to be approved before the end of 2023, just ahead of the European Parliament elections in 2024. Even after approval, there will likely be a two-year implementation period, so don't expect all the regulations to take effect until 2026 at the earliest.
That's why now is the time to understand who this massive piece of legislation applies to. First and foremost, AI systems established within the EU must comply. But the EU AI Act applies not only to AI systems developed and used within the EU; it also reaches providers outside the EU whose AI systems are introduced or used in the EU market. So just because your AI system is based in America doesn't mean you're free of this law if the system is in the EU marketplace. That's not all: even AI providers and users located outside the EU fall under the jurisdiction of the EU AI Act if their systems' outputs or results are used or have an impact within the EU. In short, most companies and organizations will have to adhere to the EU AI Act in some way.
However, some AI systems are exempt under the EU AI Act. AI systems still being researched and tested before being placed on the market are exempt, as long as they respect fundamental rights and applicable laws and are not tested in real-world conditions. For example, suppose you're a pharmaceutical company developing an AI system to assist in the discovery of new drugs, using AI to analyze vast datasets of chemicals and their interactions. As long as that AI is used in a controlled research environment, you're exempt from the EU AI Act until the system is deemed safe and effective for the market. Also exempt are public authorities from other countries and international organizations operating under international agreements, as well as AI systems developed exclusively for military purposes. In addition, AI components distributed for free under open-source licenses don't need to follow the regulation, with the exception of large general-purpose AI models like those behind ChatGPT or DALL-E.
If you have questions about how this could affect your company, or would like help preparing for an EU AI Act Conformity Assessment, reach out to BABL AI and one of its Audit Experts can help.