European Commission Releases Guidelines for Responsible Use of Generative AI in Research

Written by Jeremy Werner

Posted on 03/21/2024 in News

UPDATE – MARCH 2026:

The European Commission and European Research Area (ERA) stakeholders have continued refining their guidance through a “living guidelines” model designed to evolve alongside generative AI capabilities. In April 2025, the ERA Forum adopted an updated edition that expanded practical recommendations for research institutions, including stronger oversight mechanisms, clearer expectations around tracking AI use in research workflows, and enhanced emphasis on mitigating bias, protecting sensitive data, and managing environmental impacts associated with AI computing.

In October 2025, the European Data Protection Supervisor (EDPS) issued complementary guidance for EU institutions, reinforcing transparency, lawful data processing, and accountability when using generative AI tools in research and administrative contexts. These developments align with the EU AI Act’s broader transparency and governance objectives, particularly as new requirements for general-purpose AI systems continue phasing in through 2026 and beyond.

The guidelines remain non-binding but are increasingly treated as a foundational reference for responsible AI use in European research. Institutions, funders, and researchers are actively integrating these principles into internal policies, ethics reviews, and training programs to ensure research integrity and compliance with evolving European AI governance expectations.


ORIGINAL NEWS STORY:

European Commission Releases Guidelines for Responsible Use of Generative AI in Research

The European Commission, together with European Research Area (ERA) countries and stakeholders, has published guidelines on the responsible use of generative AI in research. Generative AI refers to AI systems capable of producing new content, such as text, code, images, or audio, based on instructions or prompts provided by the user. The output is often so realistic that it can be difficult to distinguish from human-generated content.

Balancing Innovation and Risk

Generative AI can accelerate scientific discovery, improve productivity, and support research workflows. However, the technology also introduces risks that must be addressed, including:

  • Misinformation and disinformation at scale
  • Bias and hallucinations in AI outputs
  • Erosion of ethical research practices
  • Privacy and intellectual property concerns

To address these issues, the guidelines outline principles grounded in the European Code of Conduct for Research Integrity.

Four Core Principles for Responsible Use

  1. Reliability: Researchers should maintain strong research design, methodology, and data practices. AI outputs must be verified for accuracy and checked for bias or hallucinations.
  2. Honesty: Transparency is essential. Researchers should disclose when and how generative AI was used in writing, analysis, or data processing.
  3. Respect: Ethical research requires respect for colleagues, research subjects, and the broader public. Researchers should acknowledge AI limitations, avoid reinforcing bias, and protect sensitive data.
  4. Accountability: Researchers remain responsible for all outputs regardless of AI involvement. Human oversight must guide AI use at every stage.

Practical Recommendations for Researchers

The guidelines recommend that researchers:

  • Use generative AI transparently and disclose its role
  • Avoid using AI in peer review or other sensitive evaluation processes
  • Protect privacy, confidentiality, and intellectual property
  • Follow relevant legal standards and citation practices
  • Complete training in responsible AI use

Research institutions are encouraged to incorporate these practices into ethics policies, governance frameworks, and internal training programs.

Role of Funders and Policymakers

Research funders are advised to:

  • Support responsible AI use through flexible funding programs
  • Require disclosure of AI use in research proposals
  • Ensure their own AI usage remains transparent and accountable
  • Participate in AI governance discussions
  • Invest in training and education on responsible AI

Although the guidelines are not legally binding, they provide a shared reference point across the European research ecosystem. They complement broader EU policy initiatives, including the EU AI Act, and aim to strengthen responsible AI adoption in scientific research.

Need Help?

If you're unsure how these EU guidelines, or any other AI regulations, apply to your research or organization, contact BABL AI. Their Audit Experts offer guidance to help you stay compliant and promote the ethical use of AI in research.
