The European Commission has launched a public consultation on draft guidance and a reporting template for serious incidents involving high-risk artificial intelligence systems, a key step in implementing the EU AI Act. The consultation opened September 26 and runs until November 7, 2025, inviting stakeholders to provide feedback and practical examples on how the new regime will interact with existing reporting obligations.
Under Article 73 of the AI Act, providers of high-risk AI systems will be required to report serious incidents to national market surveillance authorities starting in August 2026. The aim is to create an early warning system that identifies risks quickly, establishes accountability, enables corrective measures, and strengthens public trust in AI technologies.
The draft guidance clarifies what constitutes a “serious incident.” These include cases where an AI system leads to a person’s death or serious harm to their health, causes irreversible disruption to critical infrastructure, infringes EU fundamental rights protections, or results in serious harm to property or the environment. Examples range from biased recruitment algorithms excluding candidates based on gender or ethnicity, to AI-driven misdiagnoses in healthcare or systemic disruptions in energy supply.
The guidance also explains reporting timelines. Providers must notify authorities immediately, and in any case no later than 15 days after becoming aware of an incident. Shorter deadlines apply in severe cases: within two days for widespread infringements or disruptions to critical infrastructure, and within 10 days if a person’s death is involved.
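For providers building internal compliance tooling, the tiered deadlines above can be expressed as a simple lookup that always applies the strictest rule. This is an illustrative sketch only; the category names and function below are hypothetical, not part of any official Commission schema.

```python
from datetime import timedelta

# Illustrative deadlines from the draft guidance (Article 73, AI Act).
# Category labels are hypothetical, not official terminology.
DEADLINES = {
    "widespread_infringement": timedelta(days=2),
    "critical_infrastructure_disruption": timedelta(days=2),
    "death": timedelta(days=10),
}
DEFAULT_DEADLINE = timedelta(days=15)  # general rule for serious incidents

def reporting_deadline(incident_categories: set[str]) -> timedelta:
    """Return the strictest deadline applicable to an incident.

    Reporting is due immediately, and no later than the returned delta
    after the provider becomes aware of the incident.
    """
    applicable = [DEADLINES[c] for c in incident_categories if c in DEADLINES]
    applicable.append(DEFAULT_DEADLINE)
    return min(applicable)

print(reporting_deadline({"death"}).days)  # → 10
```

An incident falling into several categories at once (e.g. a fatality caused by a critical-infrastructure disruption) takes the shortest of the applicable windows, which is why the function returns the minimum rather than the first match.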
To streamline compliance, the Commission has published a standard reporting template. The document sets out detailed sections for providers to record administrative details, system categorization, incident descriptions, remedial actions, and preliminary analyses. It mirrors formats used in other EU regimes, such as those for medical devices and cybersecurity, and aligns with international efforts such as the OECD’s AI Incidents Monitor.
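Providers mapping the template into their own incident-management systems might model its top-level sections as a structured record. The field names below are illustrative guesses based on the sections listed in the article, not the Commission’s official schema.

```python
from dataclasses import dataclass, field

@dataclass
class SeriousIncidentReport:
    """Hypothetical sketch of the template's main sections."""
    administrative_details: dict = field(default_factory=dict)  # provider, authority, dates
    system_categorization: dict = field(default_factory=dict)   # high-risk category, intended purpose
    incident_description: str = ""                              # what happened, who was affected
    remedial_actions: list = field(default_factory=list)        # corrective measures taken or planned
    preliminary_analysis: str = ""                              # initial assessment of causes

report = SeriousIncidentReport(incident_description="Example: misdiagnosis by triage system")
print(report.incident_description)
```

Keeping each section as a distinct field makes it straightforward to validate that a report is complete before submission and to reuse the same record across overlapping regimes (e.g. NIS2 or medical-device reporting).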
The draft framework also highlights the interplay with existing EU laws, including the NIS2 Directive on cybersecurity, the Critical Entities Resilience Directive, and the Digital Operational Resilience Act for financial entities. In many cases, only fundamental rights violations would trigger additional AI Act reporting, avoiding duplication and excessive bureaucracy.
The Commission stresses that the consultation is not only about definitions and timelines, but also about ensuring consistency across regulatory regimes. Stakeholders are encouraged to comment on overlaps with GDPR breach notifications, cybersecurity obligations, and sector-specific reporting rules.
With the AI Act set to fully apply from August 2026, the Commission’s draft guidance is designed to give providers time to prepare. By establishing a clear reporting process and harmonized template, EU officials hope to balance innovation with accountability, ensuring that Europe’s AI ecosystem develops in a safe, transparent, and trustworthy manner.
The consultation documents and template are available for download on the Commission’s website, and feedback is open until November 7.
Need Help?
If you’re concerned or have questions about how to navigate the global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.