UPDATE — SEPTEMBER 2025: The EU is moving from principles to plumbing on the AI Act. CEN and CENELEC are drafting roughly 20 harmonized standards (covering risk management, transparency, data governance, conformity assessment, and testing) that will give companies a presumption of conformity once the standards are cited in the Official Journal, now expected in late 2025/early 2026.
The European AI Office and the Joint Research Centre (JRC) have issued science-for-policy briefs clarifying what “human-centred” standards should look like (verifiable human oversight, traceable datasets, and evaluation methods that measure risks to fundamental rights, not just model accuracy). The Commission also issued guidance on general-purpose AI (GPAI) in May 2025, signaling documentation, transparency, and safety-testing duties for widely deployed foundation models, even outside “high-risk” use cases. Timeline check: bans on prohibited practices took effect in early 2025; core high-risk system duties bite from mid-2026; and fuller GPAI obligations phase in through 2027. Member States are standing up their national AI supervisors and coordinating via the new European AI Board.
ORIGINAL NEWS STORY:
EU Advances Harmonized Standards for AI Act Implementation to Prioritize Public Safety and Rights
The European Commission has released new details on its ongoing work to develop harmonized standards for the EU AI Act, which came into force on August 1, 2024. As the European Union moves toward a comprehensive framework for AI governance, these standards are set to play a critical role in ensuring a fair and safe environment for AI deployment across industries, particularly benefiting small and medium-sized enterprises (SMEs).
The EU AI Act categorizes AI applications by risk level, subjecting high-risk systems to stringent requirements designed to prevent harms to individuals and society. These provisions, set to take effect after a two- to three-year transition period, will apply to sectors including healthcare, finance, transportation, and law enforcement. The harmonized standards will provide legally backed methods for companies to demonstrate compliance, creating a “legal presumption of conformity” for products adhering to these standards once their references are published in the Official Journal of the European Union.
JRC Brief Highlights Human-Centered Requirements
The European Commission’s Joint Research Centre (JRC) published a recent Science for Policy brief outlining what makes these standards different. Unlike many international AI standards that focus on organizational goals or performance, the EU AI Act standards must directly address risks to people’s rights, safety, and health. Because of this shift, the JRC emphasizes that standards must reflect human-centered protections and not only technical efficiency.
Standardization Work Moves Forward Through CEN and CENELEC
To create these standards, the Commission issued a formal standardization request in May 2023. European standardization bodies—CEN and CENELEC—accepted the assignment and are working with industry, civil society, and experts across sectors. However, progress has been slow because the topics involve complex ethical and technical questions. Reaching broad consensus has required extensive negotiation. Despite this, the goal remains clear: build shared safeguards so that high-risk AI systems across the EU can meet consistent expectations for safety and accountability.
Stronger Rules for Data Governance and Human Oversight
A major theme of the JRC’s brief is data governance. The new standards will require strict controls on data quality and data management. These rules will go beyond typical organizational practices because they must reduce risks to fundamental rights—not just improve accuracy or speed. Another major requirement is verifiable oversight. Companies will need to implement risk controls and also prove they work. This will involve testing methods that allow regulators and users to confirm that safeguards operate as intended. In many cases, these processes will also require meaningful human oversight.
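What “prove they work” will mean in practice is for the harmonized standards themselves to define, but as a rough, hypothetical illustration (not drawn from the Act or the draft standards), the Python sketch below records every human review of a high-risk decision in a tamper-evident, append-only log that an auditor could later check. The names, fields, and hash-chain design here are illustrative assumptions, not prescribed terminology.

```python
# Hypothetical sketch: an append-only oversight log that records each
# high-risk decision together with the human reviewer's action, so that
# an auditor can later verify the oversight control actually ran.
# All class and field names are illustrative, not from the AI Act.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class OversightRecord:
    case_id: str          # identifier of the decision being reviewed
    model_output: str     # what the AI system recommended
    reviewer_id: str      # who performed the human review
    reviewer_action: str  # e.g. "approved", "overridden", "escalated"
    rationale: str        # free-text justification from the reviewer
    timestamp: str        # UTC timestamp, ISO 8601


class OversightLog:
    """Append-only log with a hash chain so tampering is detectable."""

    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def append(self, record: OversightRecord) -> None:
        payload = asdict(record)
        payload["prev_hash"] = self._last_hash
        # Hash the serialized entry together with the previous hash,
        # so editing or reordering any entry breaks the chain.
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode("utf-8")
        ).hexdigest()
        payload["entry_hash"] = digest
        self._entries.append(payload)
        self._last_hash = digest

    def verify_chain(self) -> bool:
        """Recompute every hash to confirm the log was not altered."""
        prev = "0" * 64
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode("utf-8")
            ).hexdigest()
            if recomputed != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

    def oversight_coverage(self, total_decisions: int) -> float:
        """Share of high-risk decisions that received a human review."""
        return len(self._entries) / total_decisions if total_decisions else 0.0


if __name__ == "__main__":
    log = OversightLog()
    log.append(OversightRecord(
        case_id="loan-2025-0417",
        model_output="reject",
        reviewer_id="analyst-42",
        reviewer_action="overridden",
        rationale="Income data was stale; applicant meets policy thresholds.",
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
    print("chain intact:", log.verify_chain())
    print("coverage:", log.oversight_coverage(total_decisions=1))
```

In this sketch, rerunning verify_chain() confirms the log has not been altered after the fact, and oversight_coverage() gives one crude, checkable signal that human review actually happened, in the spirit of the verifiable-oversight requirement described above.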
Need Help?
If you have questions or concerns about any global guidelines, regulations, or laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.


