Status December 2024
In addition to the presentation on this website, you can also download the AI policy as a PDF. With this policy, the University of Freiburg would like to raise awareness of the conscious use of generative AI in the research context and provide recommendations on how to use the corresponding tools. The Central Data Facility supports all researchers of the University of Freiburg with the implementation of university policies.
The application of generative artificial intelligence (genAI) in research offers many opportunities, but these must be weighed against risks.
The University of Freiburg developed this policy in accordance with its mission statement and the structural and development plan “Shaping Digital Transformation” (2023, 1.), based on the “Living guidelines on the responsible use of generative AI in research” of the EU Commission (2024, 2.) and the Guidelines of the German Research Foundation (2023, 3.) for dealing with generative models for text and image generation. This document is based on the issues discussed during the event “ChatGPT – AI Tools in Research” (2023, 4.).
A responsible and informed approach to genAI aims to prevent misuse and to ensure a sensible use of AI in all research processes.
With this policy, we aim to raise awareness of a conscious use of genAI in research contexts and provide recommendations for the use of corresponding tools.
Guidelines for the use of AI-based tools in study and teaching, as well as in administration, are currently in development.
This policy must be understood as a snapshot: the development of AI is currently difficult to predict, and changes will likely become necessary. This document will therefore be updated regularly.
The term “generative AI” describes AI systems that can autonomously create novel content in response to prompts. These systems learn from data received during their training phase and use this information to generate outputs. They are based on so-called Large Language Models (LLMs), which are trained on very large datasets using highly complex computations.
The use of genAI in text and data processing makes it possible to quickly and conveniently create texts, images, videos, and code. Chatbots based on generative language models, such as ChatGPT or You.com, excel at generating text for a wide range of applications, from literature reviews to writing or debugging programming code. AI-based annotation of audio/video recordings and simultaneous translations are already widely used. Extracting data from (multiple) publications and summarizing them is very easy and quick with LLMs. As translation tools, LLMs potentially provide equal opportunities for researchers who are less proficient in the language used.
However, risks exist: genAI is not neutral and may reproduce (derogatory) discriminatory patterns present in its training data (so-called “biases”) or provide plausible but incorrect answers that do not correspond to facts (so-called “hallucinations”). This can promote the creation and dissemination of misinformation. Moreover, scientific misconduct, such as failing to disclose the use of genAI, poses a risk of misuse, as does the potential infringement of intellectual property. The linguistic quality of the generated results is often so good that the outcomes are hardly distinguishable from those created by humans. Additional risks for research may arise from the lack of transparency of tools, fees for accessing services, the use of unknown input data, or the concentration on a few providers.
Researchers should disclose which AI systems they have used and clearly label their contributions.
Researchers remain responsible for the accuracy of the generated data, texts, and results, even if they are AI-generated or AI-based.
It must be indicated whether data and the results derived from them are AI-generated. The origin and use of the utilized methods, AI systems, and data sources must be fully documented, ideally in a way that the results can be reproduced. Yet, for solutions that are only accessible via API (Application Programming Interface) (such as ChatGPT), reproducibility is not always guaranteed, as the service changes over time, and the stochastic nature of LLMs can lead to differing outputs.
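Since exact outputs often cannot be regenerated, full documentation of the model, parameters, and prompts used is the best available approximation of reproducibility. The following is a minimal sketch of how such a usage record could be kept alongside research materials; all field names and values are illustrative assumptions, not a schema prescribed by this policy.

```python
import json
from datetime import date

def record_genai_usage(model, provider, prompt, parameters, output_summary):
    """Return a JSON-serializable record of one genAI interaction,
    suitable for documenting AI use, e.g. in a methods section."""
    return {
        "date": date.today().isoformat(),   # when the tool was used
        "model": model,                     # model name and version
        "provider": provider,               # service or API used
        "prompt": prompt,                   # exact input given to the model
        "parameters": parameters,           # e.g. temperature, max tokens
        "output_summary": output_summary,   # brief description of the result
    }

# Example usage with hypothetical values:
record = record_genai_usage(
    model="example-llm-v1",
    provider="ExampleProvider API",
    prompt="Summarize the attached literature list.",
    parameters={"temperature": 0.7},
    output_summary="Draft summary, subsequently revised by the authors.",
)
print(json.dumps(record, indent=2))
```

Such a record documents the origin of AI-generated content even where the stochastic nature of the service prevents bitwise reproduction of its outputs.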
Researchers must ensure that their results are scientifically grounded and must consider and, where necessary, avoid potential harm scenarios. The use of genAI also carries the risk of biases that can lead to discriminatory decisions. Systematic bias stemming from societal conditions, where certain prejudices are inherent in the training data, must be avoided. Bias introduced during data collection or annotation, or through the choice of algorithm or system design, should be minimized.
The Committee on Publication Ethics (2023, 5.) states clearly that AI systems cannot be authors in scientific publications, among other reasons because they are not legal persons. The use of AI tools must be transparently listed in the methods section of publications. Only natural persons can appear as authors in scientific publications. They must ensure that the use of generative models does not infringe on anyone else’s intellectual property and that no scientific misconduct, such as plagiarism, occurs. Reference is made here to the regulations of the Albert Ludwig University for ensuring academic integrity (2022, 6.).
The potential for generating texts, e.g., as templates for modification by authors, can be used as long as this is documented and the authors take responsibility for the content, results (such as code), and references. In publications, the rights of publishers or copyright holders must be respected, e.g., when data is provided to commercial vendors. It is critically important to assess whether data may be shared free of charge when a commercial provider is involved.
The DFG itself regards the use of generative models in funding applications as neither positive nor negative. However, the DFG (2023, 3.) prohibits the use of generative models when preparing reviews, in order to protect the confidentiality of the review process. Documents submitted for review are confidential and must not be used as input for generative models. See also the section “Authorship in Publications Related to AI” in this policy.
The responsible use and further development of generative AI should consider the limits of the technology, its environmental impact, and its societal effects. Based on liberal democratic values, discrimination must be avoided in the development and use of genAI, diversity and fairness must be ensured, and harm to persons, lives, and the environment must be averted. Last but not least, the energy consumption of AI systems, which require significant computational resources particularly for training, should be taken into account (see examples here).
Researchers should consider ethical principles when using genAI. They must be aware of the risk of abuse even when using or developing AI tools. This includes abuse scenarios for unlawful military use or use for criminal activities (Dual Use Research of Concern). In doubtful cases, especially in instances with a high risk of abuse, researchers should consult the Committee for Responsibility in Research (Kommission für Verantwortung in der Forschung, KVF), which can discuss such issues in an interdisciplinary setting and advise researchers. See also here: Information on Good Research Practice at the University of Freiburg.
Compliance with data protection regulations and the protection of intellectual property rights through responsible and appropriate handling of personal data must be guaranteed. In addition, regulations on export control law must be adhered to when publishing or conducting research across borders, and relevant medical device regulations must be observed if applicable.
The University of Freiburg provides its members with GDPR-compliant access to genAI, see here. The page also contains examples of energy consumption related to genAI.
The EU Commission, the countries of the European Research Area, and stakeholders have jointly presented a number of guidelines to support the European research community in the responsible use of generative artificial intelligence (AI) (2.).
The “EU Artificial Intelligence Act” (EU AI Act, 2024, 7.) addresses the rapid technological development of all AI systems and incorporates this into its legal framework. The EU AI Act aims to create reliable conditions for the research and application of AI.
Five principles are intended to further secure human responsibility in research, as now called for by an interdisciplinary working group with members from politics, industry, and science in the Proceedings of the National Academy of Sciences (PNAS) (2024, 8.).
As a basis for the section “Recommendations for Researchers at the University of Freiburg,” individual text blocks were generated on July 9, 2024, by ChatGPT (GPT-4o).
Comments from Senate members were integrated into this version following the resolution on December 18, 2024.