UPDATE – FEBRUARY 2026:
The European Commission and European Research Area (ERA) stakeholders have continued refining their guidance through a “living guidelines” model designed to evolve alongside generative AI capabilities. In April 2025, the ERA Forum adopted an updated edition that expanded practical recommendations for research institutions, including stronger oversight mechanisms, clearer expectations around tracking AI use in research workflows, and enhanced emphasis on mitigating bias, protecting sensitive data, and managing environmental impacts associated with AI computing.
In October 2025, the European Data Protection Supervisor (EDPS) issued complementary guidance for EU institutions, reinforcing transparency, lawful data processing, and accountability when using generative AI tools in research and administrative contexts. These developments align with the EU AI Act’s broader transparency and governance objectives, particularly as new requirements for general-purpose AI systems continue phasing in through 2026 and beyond.
The guidelines remain non-binding but are increasingly treated as a foundational reference for responsible AI use in European research. Institutions, funders, and researchers are actively integrating these principles into internal policies, ethics reviews, and training programs to ensure research integrity and compliance with evolving European AI governance expectations.
ORIGINAL NEWS STORY:
European Commission Releases Guidelines for Responsible Use of Generative AI in Research
The European Commission, along with the European Research Area countries and stakeholders, has put forth guidelines on the responsible use of generative AI in research. Generative AI refers to AI systems that produce new content, such as text, code, images, or audio, based on instructions or prompts provided by the user. The output quality is often so realistic that it can be extremely difficult to distinguish from human-generated content.
Balancing Innovation and Risk
Generative AI offers many benefits. It can accelerate scientific discovery, improve productivity, and enhance research workflows. However, it also comes with serious risks that must be addressed:
- Misinformation and disinformation at scale
- Bias and hallucinations in AI outputs
- Erosion of ethical research practices
- Privacy and intellectual property concerns
To address these concerns, the guidelines outline key principles rooted in the European Code of Conduct for Research Integrity.
Four Core Principles for Responsible Use
- Reliability: Researchers must ensure high-quality research design, methodology, and data use. They should verify all AI outputs for accuracy and actively address known issues like bias and hallucination.
- Honesty: Full transparency is essential. Researchers must clearly disclose if and how they used generative AI in their work, including during writing, analysis, or data processing.
- Respect: Ethical research includes respecting colleagues, subjects, and the broader public. Researchers must recognize AI’s limitations, avoid reinforcing bias, and handle private or proprietary data with care.
- Accountability: Researchers remain responsible for all outputs, regardless of AI involvement. Human oversight must guide AI use at every stage, ensuring ethical decision-making and responsible communication.
Practical Recommendations for Researchers
The guidelines advise researchers to:
- Use generative AI transparently and disclose its role
- Avoid using AI in peer review or other sensitive tasks
- Protect privacy, confidentiality, and intellectual property
- Follow relevant legal standards and citation practices
- Complete training in responsible AI use
Research institutions are encouraged to integrate these guidelines into ethics policies and monitor internal AI development. They should promote safe AI use by offering training and maintaining strong data protection practices.
Role of Funders and Policymakers
Research funders are advised to:
- Support responsible AI use through open and adaptive funding programs
- Require applicants to disclose AI usage in proposals
- Ensure their own use of AI is transparent and ethical
- Stay active in AI policy discussions
- Invest in education and training around ethical AI deployment
While non-binding, these guidelines serve as a common reference point across the EU. They complement existing policy initiatives, including the EU AI Act, and help unify efforts to ensure that research involving AI remains responsible and trustworthy.
Need Help?
If you’re unsure how these EU guidelines—or any other AI regulations—apply to your research or organization, contact BABL AI. Their Audit Experts offer guidance to help you stay compliant and promote ethical AI use in research.