Policy on the Use of Generative AI in Research

Status December 2024

In addition to the presentation on this website, you can also download the AI policy as a PDF. With this policy, the University of Freiburg would like to raise awareness of the conscious use of generative AI in the research context and provide recommendations on how to use the corresponding tools. The Central Data Facility supports all researchers of the University of Freiburg with the implementation of university policies.

Preface

The application of generative artificial intelligence (genAI) in research offers many opportunities, but these must be weighed against risks.

The University of Freiburg developed this policy in accordance with its mission statement and the structural and development plan “Shaping Digital Transformation” (2023, 1.), based on the “Living guidelines on the responsible use of generative AI in research” of the EU Commission (2024, 2.) and the guidelines of the German Research Foundation (DFG) (2023, 3.) on dealing with generative models for text and image generation. It also draws on the issues discussed during the event “ChatGPT – AI Tools in Research” (2023, 4.).

A responsible and informed approach to genAI aims to prevent misuse and to ensure a sensible use of AI in all research processes.

With this policy, we aim to raise awareness of the conscious use of genAI in research contexts and to provide recommendations for the use of the corresponding tools.

Guidelines for the use of AI-based tools in study and teaching, as well as in administration, are currently in development.

This policy should be understood as a snapshot: the development of AI is currently difficult to predict, and changes will likely become necessary. This document will therefore be updated regularly.

Introduction/Background

What is “generative artificial intelligence”?

The term “generative AI” describes AI systems that can autonomously create novel content in response to prompts. These systems learn from the data they receive during their training phase and use this information to generate outputs. Text-generating systems are based on so-called Large Language Models (LLMs), which are trained on very large datasets using highly complex computations.

Opportunities and Risks of AI

The use of genAI in text and data processing makes it possible to quickly and conveniently create texts, images, videos, and code. Chatbots based on generative language models, such as ChatGPT or You.com, excel at generating texts for a wide range of applications, from literature reviews to writing or debugging programming code. AI-based annotation of audio/video recordings and simultaneous translation are already widely used. Extracting data from (multiple) publications and summarizing it is very easy and quick with LLMs. As translation tools, LLMs can potentially provide equal opportunities for researchers who are less proficient in the language used.

However, risks exist: genAI is not neutral and may reproduce (derogatory) discrimination present in its training data (so-called “biases”) or provide plausible but incorrect answers that do not correspond to facts (so-called “hallucinations”). This can promote the creation and dissemination of misinformation. Moreover, scientific misconduct, such as failing to disclose the use of genAI, poses a risk of misuse, as does the potential infringement of intellectual property. The linguistic quality of the generated results is often so good that they are hardly distinguishable from those created by humans. Additional risks for research may arise from the lack of transparency of tools, fees for accessing services, the use of unknown input data, or the concentration of the market on a few providers.

Recommendations for Researchers at the University of Freiburg

Transparency

Researchers should disclose which AI systems they have used and clearly label their contributions.

Responsibility and Accountability

Researchers remain responsible for the accuracy of the generated data, texts, and results, even if they are AI-generated or AI-based.

Labelling and Documentation

It must be indicated whether data, and the results derived from them, are AI-generated. The origin and use of the utilized methods, AI systems, and data sources must be fully documented, ideally in a way that allows the results to be reproduced. However, for services that are only accessible via an API (Application Programming Interface), such as ChatGPT, reproducibility is not always guaranteed: the service changes over time, and the stochastic nature of LLMs can lead to differing outputs.
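Even where full reproducibility is out of reach, documentation can make the use of a genAI service traceable by recording the tool, the exact model version, the query date, the parameters, and a hash of the prompt. The following is a minimal, illustrative sketch of such a provenance record; the field names and the example model identifier are assumptions, not a prescribed university format:

```python
import hashlib
import json
from datetime import date

def document_genai_use(tool, model_version, prompt, parameters, purpose):
    """Build a provenance record for a single genAI query.

    The field names are illustrative only, not an official format.
    """
    return {
        "tool": tool,                      # service or product used
        "model_version": model_version,    # exact model identifier, if known
        "date": date.today().isoformat(),  # when the query was made
        "parameters": parameters,          # e.g. temperature, seed
        # A hash documents which prompt was used without storing it verbatim:
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "purpose": purpose,                # what the output was used for
    }

record = document_genai_use(
    tool="ChatGPT",
    model_version="gpt-4o-2024-05-13",  # hypothetical example entry
    prompt="Summarize the attached abstract in two sentences.",
    parameters={"temperature": 0},
    purpose="Draft text template, later revised by the authors",
)
print(json.dumps(record, indent=2))
```

Keeping such records alongside the research data makes it possible to state, in the methods section, exactly which system produced which intermediate output, even if the service itself has since changed.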

Avoiding Bias

Researchers must ensure that their results are scientifically grounded and must consider and, where necessary, avoid potential harm scenarios. The use of genAI carries the risk of biases that can lead to discriminatory decisions. Systematic bias stemming from societal conditions, in which certain prejudices are inherent in the training data, must be avoided. Bias during data collection or annotation, or in the choice of algorithm or system design, should be minimized.

Authorship in Publications Related to AI

The Committee on Publication Ethics (2023, 5.) states clearly that AI systems cannot be authors in scientific publications, among other reasons because they are not legal persons. The use of AI tools must be transparently listed in the methods section of publications. Only natural persons can appear as authors in scientific publications. They must ensure that the use of generative models does not infringe on anyone else’s intellectual property and that no scientific misconduct, such as plagiarism, occurs. Reference is made here to the regulations of the Albert Ludwig University for ensuring academic integrity (2022, 6.).

Generated texts can be used, e.g., as templates for modification by the authors, as long as this is documented and the authors take responsibility for the content, results (such as code), and references. In publications, the rights of publishers and copyright holders must be respected, e.g., when data is provided to commercial vendors. It is critically important to assess whether data may be shared with a commercial provider free of charge.

DFG Application Process, Review

The DFG regards the use of generative models in funding applications as neither positive nor negative. However, the DFG (2023, 3.) prohibits the use of generative models when preparing reviews, owing to the confidentiality of the review process: documents submitted for review are confidential and must not be used as input for generative models. See also the section “Authorship in Publications Related to AI” in this policy.

Promoting and Supporting the Responsible Use of genAI

The responsible use and further development of generative AI should consider the limits of the technology, its environmental impact, and its societal effects. Based on liberal democratic values, discrimination must be avoided in the development and use of genAI, diversity and fairness must be ensured, and harm to persons, lives, and the environment must be averted. Last but not least, the energy consumption of AI systems, which require significant computational resources particularly for training, should be taken into account (see the examples on the university’s AI Tools page under “Further Information”).

Orientation on Ethical Issues

Researchers should consider ethical principles when using genAI. They must be aware of the risk of abuse both when using and when developing AI tools. This includes abuse scenarios such as unlawful military use or use for criminal activities (Dual Use Research of Concern). In doubtful cases, especially those with a high risk of abuse, researchers should consult the Committee for Responsibility in Research (Kommission für Verantwortung in der Forschung, KVF), which can discuss such issues in an interdisciplinary setting and advise researchers. See also: Information on Good Research Practice at the University of Freiburg.

Compliance with Legal Requirements

Compliance with data protection regulations and the protection of intellectual property rights through responsible and appropriate handling of personal data must be guaranteed. In addition, regulations on export control law must be adhered to when publishing or conducting research across borders, and relevant medical device regulations must be observed if applicable.

Further Information

AI Tools at the University of Freiburg (intranet access required)

The University of Freiburg provides its members with GDPR-compliant access to genAI via its intranet. The corresponding page also contains examples of the energy consumption associated with genAI.

Research and Innovation in the EU Regarding AI

The European Commission, the countries of the European Research Area, and stakeholders have jointly presented a number of guidelines to support the European research community in the responsible use of generative AI (2.).

EU AI Act

The “EU Artificial Intelligence Act” (EU AI Act, 2024, 7.) addresses the rapid technological development of all AI systems and incorporates this into its legal framework. The EU AI Act aims to create reliable conditions for the research and application of AI.

International Conference on Machine Learning

An interdisciplinary working group with members from politics, industry, and science has called for five principles intended to further secure human responsibility in research, published in the Proceedings of the National Academy of Sciences (PNAS) (2024, 8.).

List of Contributors 

  • Prof. Dr. Stefan Rensing, Vice Rector for Research and Innovation
  • Prof. Dr. Stefan Günther, Chief Information Officer
  • Prof. Dr. Frank Hutter, Professorship for Machine Learning
  • Prof. Dr. Silja Vöneky, Professorship for Public Law, International Law, Legal Ethics and Comparative Law, and Chair of the Committee for Responsibility in Research
  • Dr. Marc Herbstritt, Chief Information Security Officer
  • Klaus Scharpf, Data Protection Officer

As a basis for the section “Recommendations for Researchers at the University of Freiburg,” individual text blocks were generated on July 9, 2024, by ChatGPT (GPT-4o).

Comments from Senate members were integrated into this version following the resolution on December 18, 2024.

Contact Persons

Prof. Dr. Stefan Rensing

Vice Rector for Research and Innovation

References

  1. Jahresbericht der Universität Freiburg 2023, p. 7
    https://uni-freiburg.de/wp-content/uploads/Jahresbericht_der_Universitaet_Freiburg_2023.pdf (accessed 5 Nov 2024)
  2. Living guidelines on the responsible use of generative AI in research (March 2024)
    https://research-and-innovation.ec.europa.eu/document/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en (accessed 5 Nov 2024)
  3. Stellungnahme des Präsidiums der Deutschen Forschungsgemeinschaft (DFG) zum Einfluss generativer Modelle für die Text- und Bilderstellung auf die Wissenschaften und das Förderhandeln der DFG (September 2023)
    https://www.dfg.de/resource/blob/289674/ff57cf46c5ca109cb18533b21fba49bd/230921-stellungnahme-praesidium-ki-ai-data.pdf (accessed 5 Nov 2024)
  4. ChatGPT – KI-Tools in der Forschung (October 2023)
    https://videoportal.uni-freiburg.de/video/chatgpt-ki-tools-in-der-forschung-englisch-untertitelt/b08e66cd4ec53fba503a7f4028825d1b (accessed 5 Nov 2024)
  5. Committee on Publication Ethics (COPE) (2023)
    https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools (accessed 5 Nov 2024)
  6. Ordnung der Albert-Ludwigs-Universität zur Sicherung der Redlichkeit in der Wissenschaft, Amtliche Bekanntmachungen Jahrgang 53, Nr. 31, pp. 148ff (1 June 2022)
    https://uni-freiburg.de/wp-content/uploads/Uni-Freiburg-Ordnung-Redlichkeit-in-der-Wissenschaft-1.pdf (accessed 5 Nov 2024)
  7. KI-Gesetz der Europäischen Kommission (2024)
    https://www.europarl.europa.eu/topics/de/article/20230601STO93804/ki-gesetz-erste-regulierung-der-kunstlichen-intelligenz (accessed 5 Nov 2024)
  8. Blau, Wolfgang, et al. (2024), Protecting scientific integrity in an age of generative AI, PNAS Vol. 121 (22), e2407886121.