EU Releases Voluntary Code of Practice to Guide Compliance with AI Act for General-Purpose AI

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 07/11/2025
In News

The European Commission has published the long-awaited Code of Practice for General-Purpose AI (GPAI). It’s designed to help AI model providers align with the EU AI Act’s legal requirements on transparency, safety, and copyright.

The Code of Practice arrives at a critical time for industry stakeholders, many of whom have voiced concern about the lack of clarity around how to comply with the Act’s obligations—especially for developers of large, general-purpose AI models. Until the formal standards are adopted, the Code provides a temporary but actionable framework for compliance under the EU AI Act.

Developed through a multi-stakeholder process involving independent experts, the Code consists of three chapters: Transparency, Copyright, and Safety and Security. Each chapter corresponds to a key obligation under the EU AI Act and provides practical tools and guidance to assist providers in meeting their legal duties.

The Transparency chapter includes a Model Documentation Form, a standardized and user-friendly template that helps providers explain how their models work, how they were trained, and what risks they might pose—addressing the EU AI Act’s call for sufficient model transparency.

The Copyright chapter outlines best practices for implementing copyright compliance policies, helping providers navigate one of the most hotly debated topics in AI governance: how to respect intellectual property rights when training or deploying AI systems.

The Safety and Security chapter is intended for developers of the most advanced models that present systemic risk, as defined under Article 55 of the AI Act. It lays out state-of-the-art practices for risk mitigation, safety testing, and system monitoring.

While the Code is non-binding, it provides a defensible path to regulatory alignment for companies that choose to adopt it. Those who do not follow the Code—or an eventual EU AI standard—must demonstrate alternative means of compliance.

In the coming weeks, the European Commission and Member States will assess the Code’s adequacy. Additional guidance from the Commission is expected later this month to further clarify key terms and responsibilities under the EU AI Act.

With enforcement deadlines approaching, the publication of the Code is seen as a major milestone for AI governance in the EU—and a signal that implementation of the AI Act is moving forward.

Need Help?

If you have questions or concerns about any global guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.