European Commission Releases Guidelines for Responsible Use of Generative AI in Research

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 03/21/2024
In News

The European Commission, along with the European Research Area countries and stakeholders, has put forth guidelines on the responsible use of generative AI in research. Generative AI refers to AI systems capable of generating new content such as text, code, images, and audio based on instructions or prompts provided by the user. The output from these models is often so realistic that it can be extremely difficult to distinguish from human-generated content.

While generative AI provides tremendous opportunities to accelerate scientific discovery and radically improve the effectiveness and pace of research processes, it also introduces significant risks that need to be proactively addressed. Potential risks include the large-scale generation of disinformation and misinformation, the erosion of sound and ethical research practices, inaccuracies stemming from the technology’s current limitations, misuse that could undermine research integrity, and broader societal harms from bias and lack of transparency.

The key principles underpinning these guidelines are drawn from the European Code of Conduct for Research Integrity as well as prior work on ethics and trustworthy AI:

  • Reliability in ensuring quality research design, methodologies, analyses and resource utilization, including verifying outputs from generative AI for accuracy and addressing issues of bias or hallucinations.

  • Honesty in transparently developing, conducting, reviewing, reporting and communicating all research in a fair and impartial manner – including full disclosure of whether and how generative AI tools were used.

  • Respect for colleagues, research participants, subjects, society and the environment – taking into account the limitations of generative AI, its potential negative impacts like entrenching biases, and ensuring the proper handling of private/confidential information and intellectual property.

  • Accountability for all stages of the research process and its eventual societal impacts, underpinned by the notion of human agency and oversight to maintain responsibility for AI-generated outputs.

Researchers should remain ultimately accountable for all scientific outputs, use generative AI transparently while disclosing its roles and limitations, protect privacy/confidentiality/IP, follow relevant laws, provide proper citation, undertake training on responsible AI use, and avoid deploying it in sensitive activities like peer review. Research organizations should promote and support responsible AI use through guidance and training, actively monitor how the technology is being developed and used internally, reference/integrate these guidelines into research ethics policies, and implement locally-governed AI tools adhering to data protection and security standards.

Research funders should design funding instruments that are open to responsible AI use in line with good practices, review their own use of AI transparently, request information from applicants on their AI utilization, stay involved in the rapidly evolving AI landscape, and fund educational programs around ethical AI development and deployment. These guidelines are intended as a non-binding framework to set common directions around upholding research integrity when using generative AI, while allowing flexibility for stakeholders to adapt them to their specific contexts. They complement and build upon the European Union’s broader AI policy initiatives like the AI Act.

If you have questions or concerns about AI guidelines, regulations and laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.
