UPDATE — JULY 2025: This article remains accurate and aligned with the European Commission’s guidance on the responsible use of generative AI in research. These non-binding, evolving guidelines continue to promote transparency, accountability, and research integrity across EU institutions.
ORIGINAL NEWS STORY:
European Commission Releases Guidelines for Responsible Use of Generative AI in Research
The European Commission, together with the European Research Area countries and stakeholders, has put forth guidelines on the responsible use of generative AI in research. Generative AI refers to AI systems capable of generating new content, such as text, code, images, and audio, based on instructions or prompts provided by the user. The output is often so realistic that it can be extremely difficult to distinguish from human-generated content.
Balancing Innovation and Risk
Generative AI offers many benefits. It can accelerate scientific discovery, improve productivity, and enhance research workflows. However, it also comes with serious risks that must be addressed:
- Misinformation and disinformation at scale
- Bias and hallucinations in AI outputs
- Erosion of ethical research practices
- Privacy and intellectual property concerns
To address these concerns, the guidelines outline key principles rooted in the European Code of Conduct for Research Integrity.
Four Core Principles for Responsible Use
- Reliability: Researchers must ensure high-quality research design, methodology, and data use. They should verify all AI outputs for accuracy and actively address known issues like bias and hallucination.
- Honesty: Full transparency is essential. Researchers must clearly disclose if and how they used generative AI in their work, including during writing, analysis, or data processing.
- Respect: Ethical research includes respecting colleagues, subjects, and the broader public. Researchers must recognize AI's limitations, avoid reinforcing bias, and handle private or proprietary data with care.
- Accountability: Researchers remain responsible for all outputs, regardless of AI involvement. Human oversight must guide AI use at every stage, ensuring ethical decision-making and responsible communication.
Practical Recommendations for Researchers
The guidelines advise researchers to:
- Use generative AI transparently and disclose its role
- Avoid using AI in peer review or other sensitive tasks
- Protect privacy, confidentiality, and intellectual property
- Follow relevant legal standards and citation practices
- Complete training in responsible AI use
Research institutions are encouraged to integrate these guidelines into ethics policies and monitor internal AI development. They should promote safe AI use by offering training and maintaining strong data protection practices.
Role of Funders and Policymakers
Research funders are advised to:
- Support responsible AI use through open and adaptive funding programs
- Require applicants to disclose AI usage in proposals
- Ensure their own use of AI is transparent and ethical
- Stay active in AI policy discussions
- Invest in education and training around ethical AI deployment
While non-binding, these guidelines serve as a common reference point across the EU. They complement existing policy initiatives, including the EU AI Act, and help unify efforts to ensure that research involving AI remains responsible and trustworthy.
Need Help?
If you’re unsure how these EU guidelines, or any other AI regulations, apply to your research or organization, contact BABL AI. Their Audit Experts offer guidance to help you stay compliant and promote ethical AI use in research.

