UPDATE — AUGUST 2025: Since the European Commission announced the initiative to create a General-Purpose AI Code of Practice in September 2024, the process has advanced significantly. Four working groups—on transparency and copyright, risk assessment, technical risk mitigation, and internal risk management—met between October 2024 and April 2025. Their output fed into a consolidated draft Code of Practice, released by the EU AI Office in June 2025 for public consultation.
The draft sets out practical obligations for developers of foundation and large language models. On transparency, it requires training data summaries, stronger provenance tracking (including watermarking of AI-generated content), and clearer copyright accountability. On risk assessment, it establishes a classification framework and mandates documented red-team testing for high-impact models. On technical safeguards, it recommends adversarial testing, robustness checks, and open vulnerability reporting modeled after bug bounty systems. On internal governance, it calls for independent oversight boards, continuous monitoring of deployed models, and regular public reporting.
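To make the transparency obligations concrete: a provenance record of the kind the draft envisions can be as simple as a machine-readable manifest attached to each generated artifact. The sketch below is purely illustrative and is not taken from the draft Code; the schema and the `attach_provenance` helper are hypothetical, and real deployments would more likely build on an established standard such as C2PA.

```python
import hashlib
import json
from datetime import datetime, timezone

def attach_provenance(content: str, model_id: str) -> dict:
    """Build a hypothetical provenance manifest for AI-generated text.
    Field names are illustrative only; the draft Code does not
    prescribe a schema."""
    return {
        "generator": model_id,  # which model produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "ai_generated": True,  # explicit machine-readable disclosure
    }

if __name__ == "__main__":
    text = "Example output from a general-purpose AI model."
    manifest = attach_provenance(text, model_id="example-model-v1")
    # A sidecar JSON file is one simple way to ship the manifest with the content.
    print(json.dumps(manifest, indent=2))
```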
While not legally binding, the Code is designed as a soft-law instrument linked to the EU AI Act. Adopting it would demonstrate “good faith” compliance under the AI Act and could earn companies a future “trustworthy AI” label. The EU AI Office plans to finalize the Code by late 2025, in time to serve as a bridge before the AI Act’s obligations for high-risk AI systems take effect in 2026. Major AI providers, including OpenAI, Google DeepMind, Anthropic, and several EU-based labs, are already piloting adherence programs. Civil society groups are pressing for stronger copyright disclosure rules, while industry continues to push back on feasibility grounds.
ORIGINAL NEWS STORY:
EU AI Office Kicks Off Development of General-Purpose AI Code of Practice with Expert Chairs and Working Groups
The European Union has officially launched a significant initiative to develop the first-ever General-Purpose AI Code of Practice under the framework of the EU AI Act. The project, spearheaded by the European Artificial Intelligence Office (EU AI Office), aims to craft comprehensive guidelines for the safe and transparent use of general-purpose AI models. The kick-off plenary event, set to bring together hundreds of participants from academia, industry, and civil society, will mark the beginning of a collaborative effort to shape the future of AI governance in the EU.
Goal: Safe, Transparent and Accountable AI
The Code of Practice will outline best practices for transparency, ethical development, and risk management across AI systems. Its purpose is to address concerns related to AI in sensitive sectors such as law, cybersecurity, and healthcare. At the same time, the EU AI Office wants to give companies clarity so they can innovate responsibly.
Four Working Groups Drive the Drafting
Four expert working groups are leading the drafting:
- Transparency and copyright
- Risk assessment
- Technical risk mitigation
- Internal risk management
Nuria Oliver of the ELLIS Alicante Foundation and copyright scholar Alexander Peukert co-chair the transparency and copyright group. Oliver brings expertise in human-centric AI, while Peukert provides depth in intellectual property law.
Yoshua Bengio, a Turing Award winner known for his work in deep learning, chairs the technical risk mitigation group. His involvement underscores the EU’s focus on meaningful safeguards for general-purpose models.
Former European Parliament member Marietje Schaake leads the internal risk management group. Her background in digital rights policy supports the push for accountability inside AI-developing organizations.
Broad Participation and Continuous Feedback
The working groups will review submissions from participants and refine the draft in iterative sessions through April 2025. The EU AI Office has already collected hundreds of contributions from companies, researchers, and civil society groups. This approach is meant to ensure the Code reflects shared priorities across the AI ecosystem. Ultimately, the goal is simple: establish a practical playbook so developers build AI systems that are transparent, accountable, and safe.
Need Help?
If you have questions about global AI guidelines, regulations, or compliance, reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.