UPDATE — AUGUST 2025: Since the European Commission announced the initiative to create a General-Purpose AI Code of Practice in September 2024, the process has advanced significantly. Four working groups, covering transparency and copyright, risk assessment, technical risk mitigation, and internal risk management, met between October 2024 and April 2025. Their output fed into a consolidated draft Code of Practice, which the EU AI Office released for public consultation in June 2025.
The draft sets out practical obligations for developers of foundation and large language models. On transparency, it requires training data summaries, stronger provenance tracking (including watermarking of AI-generated content), and clearer copyright accountability. On risk assessment, it establishes a classification framework and mandates documented red-team testing for high-impact models. On technical safeguards, it recommends adversarial testing, robustness checks, and open vulnerability reporting modeled after bug bounty systems. On internal governance, it calls for independent oversight boards, continuous monitoring of deployed models, and regular public reporting.
While not legally binding, the Code is designed as a soft-law instrument linked to the EU AI Act. Adopting it would demonstrate “good faith” compliance under the AI Act and may earn companies a future “trustworthy AI” label. The EU AI Office plans to finalize the Code by late 2025, in time to serve as a bridge before the AI Act’s obligations for high-risk systems take effect in 2026. Major AI providers, including OpenAI, Google DeepMind, Anthropic, and several EU-based labs, are already piloting adherence programs. Civil society groups are pressing for stronger copyright disclosure rules, while industry continues to push back, citing feasibility concerns.
ORIGINAL NEWS STORY:
EU AI Office Kicks Off Development of General-Purpose AI Code of Practice with Expert Chairs and Working Groups
The European Union has officially launched a significant initiative to develop the first-ever General-Purpose AI Code of Practice under the framework of the EU AI Act. The project, spearheaded by the European Artificial Intelligence Office (EU AI Office), aims to craft comprehensive guidelines for the safe and transparent use of general-purpose AI models. The kick-off plenary event, set to bring together hundreds of participants from academia, industry, and civil society, will mark the beginning of a collaborative effort to shape the future of AI governance in the EU.
The General-Purpose AI Code of Practice will serve as a foundational document outlining best practices for transparency, risk management, and the ethical development of AI systems. It is designed to address emerging concerns over AI’s role in sectors like healthcare, law, and cybersecurity, while also promoting innovation.
At the heart of this initiative are the chairs and vice-chairs of the four working groups tasked with developing the code. These experts, chosen for their deep expertise in computer science, AI law, and governance, will lead the drafting process, bringing diverse perspectives from across Europe and beyond. The working groups will focus on transparency and copyright, risk assessment, technical risk mitigation, and internal risk management of AI systems.
Nuria Oliver, Director of the ELLIS Alicante Foundation, and Alexander Peukert, a leading expert in European copyright law, will co-chair the working group on transparency and copyright-related rules. Oliver’s extensive experience in human-centric AI and Peukert’s legal background will be critical in addressing the challenges AI poses to intellectual property and transparency.
The working groups will synthesize feedback from participants in a series of iterative discussions from October 2024 to April 2025. With nearly 430 submissions already collected through a multi-stakeholder consultation, the initiative promises to be a robust, inclusive process.
Yoshua Bengio, one of the world’s foremost experts in AI, will chair the group focused on technical risk mitigation. Bengio, a Turing Award laureate, is best known for his pioneering work in deep learning. His involvement signals the importance of creating strong safeguards for AI systems, ensuring they can be developed safely without compromising on innovation.
The overarching goal of the General-Purpose AI Code of Practice is to ensure that AI is developed in a responsible and inclusive manner. The EU AI Office is emphasizing the importance of creating a code that promotes not only innovation but also accountability and ethical AI deployment. The code will require developers to implement clear risk assessment frameworks, disclose the use of AI in specific applications, and ensure that AI systems are designed with transparency at their core.
Marietje Schaake, a former Member of the European Parliament and an expert in AI governance, will chair the working group on internal risk management and governance. Schaake has long been an advocate for stronger digital rights and governance mechanisms, and her leadership in this working group will focus on ensuring that AI developers establish robust internal oversight mechanisms.
The collaborative drafting process will continue through April 2025, with input from across the AI development ecosystem. By the time the final draft of the General-Purpose AI Code of Practice is presented, it will represent the combined efforts of leading experts from various fields, including computer science, law, and policy.
Need Help?
If you have questions or concerns about any global guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and help ensure you’re informed and compliant.