UPDATE — SEPTEMBER 2025:
The European AI Office’s call for evaluators in late 2024 culminated in the December 13, 2024 systemic risk workshop, which gathered researchers, evaluators, and policymakers to test methodologies for assessing high-impact risks from general-purpose AI (GPAI) systems. Discussions centered on categories such as disinformation, cyber offense, and CBRN misuse, with participants urging more standardized approaches. The Office committed to translating those insights into technical guidance for GPAI developers.
By early 2025, the European Commission announced the creation of a Systemic Risk Board under the AI Office, designed to institutionalize cooperation among ENISA, national data protection authorities, and independent experts. This board will steer systemic risk monitoring and help align enforcement practices across the EU. In May 2025, the AI Office released draft technical documentation guidelines outlining how GPAI providers must record risk assessments, adversarial testing, and incident reporting in line with Article 52a of the EU AI Act. These guidelines directly reflected themes from the December workshop, particularly around cyber offense, democratic integrity, and loss-of-control risks.
The initiative also expanded into research support: in June 2025, the Commission launched Horizon Europe funding calls for projects dedicated to AI safety evaluations, encouraging partnerships between academic labs, SMEs, and regulators. Looking ahead, the AI Office has announced that first compliance checks for GPAI developers will begin in late 2025, with potential enforcement actions, including fines, expected as early as 2026 if firms fail to provide systemic risk documentation.
As of September 2025, the European AI Office has clearly shifted from consultation toward implementation and enforcement. Developers of large AI models that operate in Europe must prepare for their first formal systemic risk reports in 2026, with additional technical consultations scheduled for the end of 2025 to refine methodologies for cyber, disinformation, and democratic-process risks.
ORIGINAL NEWS POST:
European AI Office Seeks Expert Contributions for Workshop on General-Purpose AI Risks
The European AI Office has issued a call for evaluators to join an online workshop focused on assessing systemic risks posed by general-purpose AI models. Set for December 13, 2024, the event aims to improve risk-assessment methods and strengthen collaboration under the EU AI Act.
Purpose of the Workshop
This initiative forms part of the AI Office’s mission to support safe and trustworthy AI development across Europe. The workshop will bring together evaluators, researchers, and officials to discuss best practices, share challenges, and highlight new ideas. Participants will present their methods, showcase recent work, and offer insights to help build a stronger evaluation ecosystem.
Systemic Risks Under Review
During the event, attendees will analyze several categories of systemic risk. These include:
- CBRN Risks: Potential misuse of AI in chemical, biological, radiological, and nuclear threats.
- Cyber Offense: Risks linked to offensive cyber capabilities enabled by AI.
- Major Accidents: Large-scale disruptions or interference with critical infrastructure.
- Loss of Control: Challenges in ensuring oversight and alignment of autonomous AI models.
- Discrimination: The risk of producing biased or unfair outcomes.
- Privacy Infringements: Concerns over data misuse and breaches.
- Disinformation: The spread of harmful or false information through AI-generated content.
- Other Systemic Risks: Broader threats to public health, safety, democratic processes, or fundamental rights.
These topics reflect the EU’s ongoing push to build strong frameworks for identifying, assessing, and reducing high-impact AI risks.
Eligibility and Submission Details
The AI Office invites organizations and research groups that specialize in AI evaluation to submit abstracts of previously published work. Eligible participants must be registered organizations or university-affiliated groups with proven experience. Submissions will be reviewed for technical quality, relevance, and alignment with the Office’s goals.
Key dates include:
- Submission Deadline: December 8, 2024
- Invitation Notification: December 11, 2024
- Workshop Date: December 13, 2024, at 14:00 CET
Role in the EU AI Act
Participants will help shape future evaluation methods used across the EU. Under the EU AI Act, providers of general-purpose AI models must reduce systemic risks, conduct adversarial testing, report incidents, and maintain strong cybersecurity measures. The European AI Office is responsible for enforcing these obligations, investigating identified risks, and issuing fines when required.
This workshop offers experts the chance to influence how Europe approaches systemic AI risks and to contribute to the development of trusted AI systems.
Need Help?
If you have questions or concerns about any global guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you stay informed and compliant.