The European Commission has released new details on its ongoing work to develop harmonized standards for the EU AI Act, which came into force on August 1, 2024. As the European Union implements its comprehensive framework for AI governance, these standards are set to play a critical role in ensuring a fair and safe environment for AI deployment across industries, with particular benefits for small and medium-sized enterprises (SMEs).
The EU AI Act categorizes AI applications by risk level, subjecting high-risk systems to stringent requirements designed to prevent harm to individuals and society. These provisions, set to take effect after a two- to three-year transition period, will apply to sectors including healthcare, finance, transportation, and law enforcement. The harmonized standards will give companies legally backed methods for demonstrating compliance: once published in the Official Journal of the European Union, they will create a “legal presumption of conformity” for products that adhere to them.
The European Commission’s Joint Research Centre (JRC) recently released a Science for Policy brief, shedding light on the unique demands these standards must meet. Unlike general industry standards, the EU AI Act standards are designed to specifically address the risks AI could pose to individuals’ rights, safety, and health. According to the JRC, this focus marks a departure from traditional international AI standards, which have typically prioritized the operational goals of AI-deploying organizations rather than human-centered concerns.
In developing these standards, the Commission initiated a formal standardization request in May 2023, more than a year before the Act’s enforcement. European standardization bodies, including CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization), accepted the task, which involves collaborative input from various sectors and organizations. The objective is to build consensus on the essential safeguards AI systems must adopt, but progress has been gradual. Standardization stakeholders report that reaching agreements on key issues has been challenging due to the complex ethical and technical considerations AI regulation entails.
One major aspect highlighted in the JRC brief is data governance within AI systems. The forthcoming standards will require stringent data quality and governance measures that go beyond typical organizational data handling practices. These protocols are intended to mitigate specific risks to human rights and well-being, rather than solely achieving efficiency or accuracy. This requirement aligns with the EU AI Act’s overarching aim of prioritizing individual protections over business objectives in high-risk applications of AI.
A further stipulation for the AI standards is the need for verifiable oversight mechanisms. Standards will demand that companies not only implement measures to manage risks but also demonstrate their effectiveness through testing protocols, involving human oversight where necessary. This approach ensures that the safeguards are both actionable and trackable, providing tangible proof of safety and reliability for users and regulators alike.
Need Help?
If you have questions or concerns about any global guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.