European AI Office Seeks Expert Contributions for Workshop on General-Purpose AI Risks

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 11/26/2024
In News

UPDATE — SEPTEMBER 2025: The European AI Office’s call for evaluators in late 2024 culminated in the December 13, 2024 systemic risk workshop, which gathered researchers, evaluators, and policymakers to test methodologies for assessing high-impact risks from general-purpose AI (GPAI) systems. Discussions centered on categories such as disinformation, cyber offense, and CBRN misuse, with participants urging more standardized approaches. The Office committed to translating those insights into technical guidance for GPAI developers.

By early 2025, the European Commission announced the creation of a Systemic Risk Board under the AI Office, designed to institutionalize cooperation among ENISA, national data protection authorities, and independent experts. This board will steer systemic risk monitoring and help align enforcement practices across the EU. In May 2025, the AI Office released draft technical documentation guidelines outlining how GPAI providers must record risk assessments, adversarial testing, and incident reporting in line with Articles 53 and 55 of the EU AI Act. These guidelines directly reflected themes from the December workshop, particularly around cyber offense, democratic integrity, and loss-of-control risks.

The initiative also expanded into research support: in June 2025, the Commission launched Horizon Europe funding calls for projects dedicated to AI safety evaluations, encouraging partnerships between academic labs, SMEs, and regulators. Looking ahead, the AI Office has announced that the first compliance checks for GPAI developers will begin in late 2025, with potential enforcement actions, including fines, expected as early as 2026 if firms fail to provide systemic risk documentation.

As of September 2025, the European AI Office has clearly shifted from consultation toward implementation and enforcement. Developers of large AI models that operate in Europe must prepare for their first formal systemic risk reports in 2026, with additional technical consultations scheduled for the end of 2025 to refine methodologies for cyber, disinformation, and democratic-process risks.

ORIGINAL NEWS POST:

European AI Office Seeks Expert Contributions for Workshop on General-Purpose AI Risks

The European AI Office has issued a call for evaluators to participate in an online workshop dedicated to assessing systemic risks posed by general-purpose AI models. Scheduled for December 13, 2024, the event aims to advance methodologies and foster collaboration under the framework of the EU AI Act.

This initiative is part of the European AI Office’s broader mission to ensure the safety and trustworthiness of AI technologies, particularly those with wide-reaching societal implications.

The workshop will convene leading evaluators, researchers, and officials from the AI Office to discuss best practices, challenges, and innovations in evaluating systemic risks linked to advanced AI systems. Participants will showcase their expertise, present methodologies, and contribute insights to strengthen the evaluation ecosystem under the EU AI Act.

Key systemic risks to be explored during the workshop include:

  • CBRN Risks: Potential misuse of AI in chemical, biological, radiological, and nuclear threats.
  • Cyber Offense: Risks linked to offensive cyber capabilities enabled by AI.
  • Major Accidents: Large-scale disruptions or interference with critical infrastructure.
  • Loss of Control: Challenges in ensuring oversight and alignment of autonomous AI models.
  • Discrimination: The risk of producing biased or unfair outcomes.
  • Privacy Infringements: Concerns over data misuse and breaches.
  • Disinformation: The spread of harmful or false information through AI-generated content.
  • Other Systemic Risks: Broader threats to public health, safety, democratic processes, or fundamental rights.

The event is expected to support the development of robust frameworks for identifying, assessing, and mitigating these risks, contributing to the EU’s leadership in AI governance.

The European AI Office invites organizations and research groups specializing in AI evaluations to submit abstracts of their previously published work. Eligible participants must be registered organizations or university-affiliated groups with demonstrated experience in the field. Submissions will be evaluated based on their technical rigor, relevance, and alignment with the office’s mission.

Key dates include:

  • Submission Deadline: December 8, 2024
  • Invitation Notification: December 11, 2024
  • Workshop Date: December 13, 2024, at 14:00 CET

Participants will have the opportunity to shape the development of AI evaluation methodologies, share insights, and influence the EU’s approach to regulating high-impact AI systems.

Under the EU AI Act, providers of general-purpose AI models are required to mitigate systemic risks, conduct adversarial testing, report incidents, and ensure cybersecurity. The European AI Office is tasked with enforcing these requirements, investigating risks, and imposing fines where necessary.

Need Help?

If you have questions or concerns about global AI guidelines, regulations, or laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.