UPDATE — SEPTEMBER 2025: Since the release of the second draft in December 2024, the GPAI Code of Practice has moved through its consultation and finalization process:
- Third Draft (March 2025): Issued later than planned, integrating feedback from additional workshops and Member State consultations.
- Final Code Adopted (May 2025): The EU AI Office and drafting panel released the definitive version, aligned with the May 2 AI Act milestone for GPAI model obligations.
- Key Refinements:
  - Transparency: GPAI providers must now publish system cards detailing training data provenance, copyright safeguards, and risk controls.
  - Systemic Risk Mitigation: Mandatory red-teaming and adversarial testing were introduced for models deemed to pose systemic risk, along with incident response requirements.
  - Proportional Compliance: SMEs benefit from lighter, proportional obligations, with the strictest requirements targeted at frontier developers.
  - Copyright & Data Use: The final Code obliges providers to demonstrate lawful licensing or use of copyright-protected works in training.
  - KPIs: Compliance is now tracked via metrics such as incident reporting frequency, transparency adherence, and user complaint resolution.
- Implementation (July 2025): The EU AI Office began applying the Code as a soft-law instrument. The European Commission confirmed that adherence can create a presumption of compliance with some GPAI provisions under the AI Act.
- Industry Adoption: Major developers (OpenAI, Anthropic, Aleph Alpha, Mistral, Google DeepMind) have committed to the Code.
- Ongoing Debate: Civil society groups argue the text remains too industry-driven, while others view it as a vital interim tool ahead of full AI Act enforcement in 2026.
ORIGINAL NEWS STORY:
Second Draft of General-Purpose AI Code of Practice Released, Reflecting Stakeholder Feedback
The second draft of the General-Purpose AI Code of Practice, designed to align AI providers with the EU AI Act, was unveiled on December 14, 2024. Developed by a coalition of independent experts and overseen by the EU AI Office, the document incorporates extensive feedback from its inaugural draft and outlines robust guidelines for the governance of general-purpose AI models.
This evolving Code, which aims to harmonize innovation with regulation, addresses a wide range of AI applications, particularly those classified as “general-purpose AI models with systemic risk.” It provides a comprehensive framework for transparency, copyright adherence, risk assessment, and governance to ensure that AI technologies comply with EU standards while safeguarding societal interests.
The second draft builds upon its predecessor by offering clearer commitments, refined measures, and preliminary Key Performance Indicators (KPIs). The updates aim to enhance the document's practicality and proportionality, reflecting the input of over 1,000 stakeholders and 354 written submissions received since the first draft.
Among the highlighted changes:
- Transparency and Copyright Compliance: The draft emphasizes the importance of transparent documentation and adherence to copyright laws. It requires AI providers to disclose information about model training data, design specifications, and deployment practices. Providers must also implement copyright policies to prevent the unauthorized use of protected content during model training and operation.
- Risk Management for Systemic Risks: The Code outlines risk assessment protocols for high-impact AI models. It includes provisions for adversarial testing, incident reporting, and cybersecurity safeguards to mitigate potential harms. These measures are tailored to address systemic risks such as large-scale manipulation, loss of human oversight, and technological misuse.
- Stakeholder Engagement and Feedback Integration: The draft reflects inputs from EU Member States, industry leaders, and civil society. Workshops and working group meetings facilitated dialogue on risk mitigation, transparency, and governance challenges. Additionally, the Code aligns with international standards and emerging global best practices.
The Code is positioned as a “future-proof” document, designed to adapt to advancements in AI technology. This iteration introduces a structured approach to balancing innovation with accountability, ensuring proportional obligations based on provider size and model impact. The drafting team acknowledged the dynamic nature of AI and emphasized that the Code would evolve alongside technological and societal developments.
Feedback on the second draft is open until January 15, 2025. A third draft is anticipated in February 2025, incorporating further refinements based on ongoing consultations and workshops. The final version of the Code is slated for completion by May 2, 2025, coinciding with the implementation of new EU rules for general-purpose AI models.
Need Help?
If you have questions or concerns about any global guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.