This article is based on the latest knowledge of Generative AI. Because the technology is evolving quickly, we recommend that readers check back often for the most up-to-date information.
Introduction
Generative Artificial Intelligence (GenAI) includes tools that can generate text, images, audio, and video. GenAI tools such as ChatGPT and Microsoft Copilot often leverage algorithms, including Large Language Models (LLMs), that build on existing data and user input to produce outputs. Penn GSE embraces the appropriate use of GenAI for innovation and for efforts to improve teaching and learning. This document provides guidance on the potential risks of using GenAI at Penn GSE.
Major Risks When Using GenAI
While GenAI has great potential, several risk factors may lead to serious consequences when using AI tools. We recommend that you check the Guidance on Large Language Models by Penn Information Systems & Computing.
Use of Generative AI
Due to these risks, AI should be used with caution, especially in academia. Used appropriately, AI may aid selected aspects of your learning. Below is guidance on the use of AI in the Penn GSE community.
💬 Transparency
Be transparent about the use of AI in all your work, whether it was created wholly or partially with an AI tool. If possible, disclose which model was used and how AI was used to create the work product. You are responsible for communicating when you have used generative AI in your original work, regardless of its form (text, image, video, etc.). More guidance on copyright issues from the US Copyright Office can be accessed here.
✅ Accountability
If AI is allowed in the class, students are accountable for their use of content created by AI and should be wary of misinformation or "hallucinations" by AI tools (e.g., citations to publications or source materials that do not exist, or references that otherwise distort the truth).
🤝 Academic Integrity
All use of AI should follow Penn's Code of Student Conduct and the Code of Academic Integrity. Individual courses may have narrower guidance on using AI, citing AI output, and maintaining academic integrity with AI, which should be adhered to within the context of the course.
In the absence of other guidance, treat the use of AI as you would treat assistance from another person. For example, if it is unacceptable to have another person substantially complete a task, such as writing an essay, it is also unacceptable to have AI complete that task.
🖥️ Security and Data Privacy
Having access to data does not mean that you may copy the data into AI tools or use the data to train an AI model. Make sure that you do not leak any confidential university or organizational information. For the complete scope of University Confidential Data, read University Data Classification.
Use of University Confidential Data in any third-party (non-Penn-approved) generative AI tool is prohibited, whether the tool is a free or paid service. Please be mindful of data security before you use AI tools in your coursework and research activities. We recommend that you consult with your lead researcher or faculty member to make sure your use of AI does not violate the university confidentiality policy.
Many tools offer a selection of Service Level Agreements (SLAs) and Data Use Agreements (DUAs). Students are expected to opt out of chat-history tracking and model-training usage to avoid leaking confidential information. See instructions on how to opt out of history tracking in ChatGPT.