UPDATE — SEPTEMBER 2025: Since the Joint Research Centre (JRC) released its policy brief on generative AI in the public sector in early 2024, both the EU and Member States have advanced significantly:
EU-Level Progress
- EU AI Act Adoption: Finalized in spring 2024, the Act’s obligations are phasing in through 2026. Public sector agencies must document AI-assisted decisions, and foundation models with systemic risk potential face new transparency rules.
- European AI Office: Established June 2024 to coordinate governance, oversee general-purpose AI, and ensure Member State compliance.
- Civil Service Training: Since early 2025, civil servants across the EU have had access to AI literacy and bias-awareness courses through the European School of Administration.
Member State Initiatives
- Germany: Expanded its F13 AI assistant in mid-2025 to multiple ministries, adding bias-detection features.
- Spain: Instituted mandatory human review for any AI-generated outputs used in legal contexts (late 2024).
- Finland: Scaled Helsinki’s UrbanistAI in 2025 to generate participatory policy scenarios linked to environmental goals.
- Bulgaria: Secured EU Digital Europe Programme funding in 2025 for BgGPT, now being piloted in public services and schools.
Policy Shifts and Challenges
- Sensitive Use Bans: France and the Netherlands temporarily banned generative AI in judicial decision-making pending stronger evaluations.
- Audit Pilots: Governments began testing audit and oversight frameworks in 2025, aligned with EU AI Act conformity requirements.
- Open-Source Push: Early 2025 saw €400 million in new EU funding for open-source language models, targeting under-represented European languages.
ORIGINAL NEWS STORY:
European Commission Highlights Generative AI’s Transformative Role in Public Sector
The European Commission’s Joint Research Centre (JRC) has released a comprehensive policy brief analyzing the transformative potential of generative artificial intelligence (AI) in the public sector. The report underscores both the opportunities and challenges posed by generative AI as its integration accelerates across public administration.
The brief reveals that 30% of public managers in the European Union are already leveraging generative AI tools like ChatGPT and Claude for tasks such as drafting documents, summarizing information, and managing data. Another 44% plan to adopt these technologies in the near future. Despite the enthusiasm, 26% of surveyed managers remain hesitant, citing knowledge gaps, confidence issues, and ethical concerns.
Public administrations across Europe are exploring innovative applications for generative AI. Examples include:
- Germany: The Baden-Württemberg administration uses the F13 AI text assistant for tasks such as document summarization and research support while ensuring data security.
- Spain: The Ministry of Justice employs AI tools for legal document summarization, enhancing accessibility and research efficiency.
- Finland: Helsinki’s UrbanistAI platform involves citizens in urban planning, creating interactive visualizations based on local laws.
- Bulgaria: The development of BgGPT, an open-source Bulgarian language model, aims to improve accessibility for public and private sectors.
These use cases highlight generative AI’s potential to enhance productivity, accessibility, and decision-making in the public sector.
While the benefits are clear, the report warns of significant risks, including:
- Data Privacy and Security: Public servants must navigate challenges in protecting sensitive information.
- Bias and Hallucinations: Generative AI systems can produce biased or inaccurate outputs, raising accountability concerns.
- Ethical Implications: The potential misuse of AI in decision-making processes demands stringent governance.
The report emphasizes the importance of integrating human oversight and developing safeguards to address these risks effectively.
The EU AI Act provides a regulatory foundation for generative AI, requiring transparency, ethical compliance, and risk assessments. Complementary guidelines and frameworks from Member States are also emerging, aimed at fostering trust and mitigating risks. Notably, the Netherlands has imposed strict controls, limiting generative AI’s use to experimental purposes unless compliance with privacy laws can be assured.
The JRC stresses the need for continued collaboration among policymakers, public administrators, academia, and the private sector. Strategic initiatives, such as open-source language models tailored to less-represented European languages, are critical for advancing digital sovereignty.
The brief calls for robust training programs to equip civil servants with the skills needed to harness AI effectively. It also advocates for transparent monitoring systems to measure the technology’s impact on service delivery and governance.
Need Help?
If you have questions or concerns about any global guidelines, regulations, and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.


