The second draft of the General-Purpose AI Code of Practice, designed to help providers of general-purpose AI models comply with the EU AI Act, was unveiled on December 14, 2024. Developed by a coalition of independent experts and overseen by the EU AI Office, the document incorporates extensive feedback on the inaugural draft and outlines robust guidelines for the governance of general-purpose AI models.
This evolving Code, which aims to harmonize innovation with regulation, addresses a wide range of AI applications, particularly those classified as “general-purpose AI models with systemic risk.” It provides a comprehensive framework for transparency, copyright adherence, risk assessment, and governance to ensure that AI technologies comply with EU standards while safeguarding societal interests.
The second draft builds upon its predecessor by offering clearer commitments, refined measures, and preliminary Key Performance Indicators. The updates aim to enhance the document’s practicality and proportionality, reflecting the input of over 1,000 stakeholders and 354 written submissions received since the first draft.
Among the highlighted changes:
- Transparency and Copyright Compliance: The draft emphasizes the importance of transparent documentation and adherence to copyright laws. It requires AI providers to disclose information about model training data, design specifications, and deployment practices. Providers must also implement copyright policies to prevent the unauthorized use of protected content during model training and operation.
- Risk Management for Systemic Risks: The Code outlines risk assessment protocols for high-impact AI models. It includes provisions for adversarial testing, incident reporting, and cybersecurity safeguards to mitigate potential harms. These measures are tailored to address systemic risks such as large-scale manipulation, loss of human oversight, and technological misuse.
- Stakeholder Engagement and Feedback Integration: The draft reflects input from EU Member States, industry leaders, and civil society. Workshops and working group meetings facilitated dialogue on risk mitigation, transparency, and governance challenges. Additionally, the Code aligns with international standards and emerging global best practices.
The Code is positioned as a “future-proof” document, designed to adapt to advancements in AI technology. This iteration introduces a structured approach to balancing innovation with accountability, ensuring proportional obligations based on provider size and model impact. The drafting team acknowledged the dynamic nature of AI and emphasized that the Code would evolve alongside technological and societal developments.
Feedback on the second draft is open until January 15, 2025. A third draft is anticipated in February 2025, incorporating further refinements based on ongoing consultations and workshops. The final version of the Code is slated for completion by May 2, 2025, coinciding with the implementation of new EU rules for general-purpose AI models.
Need Help?
If you have questions or concerns about any global guidelines, regulations, or laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.