Generative AI in Teaching and Learning at the GSD

Image generated by Adobe Firefly (beta)
This page provides policies, information, and guidance regarding the use of generative AI in teaching and learning at the GSD. Generative AI refers to artificial intelligence trained on large data sets to predict letters, words, sounds, images, code, and other output based on the likelihood of those so-called ‘tokens’ occurring together. Note that generative AI is not intelligent and does not think or reason as we understand it; it produces output that mimics the data it was trained on, flaws included.
Updated 10/24/2024
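The ‘token prediction’ described above can be illustrated with a deliberately tiny sketch: a bigram word model that predicts the next word purely from co-occurrence counts. This is a toy for intuition only, not how any production system is implemented, and the corpus text is invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration: a bigram "language model" that predicts the next word
# purely from how often word pairs co-occur in its training text -- the
# same statistical principle, at a vastly smaller scale, that underlies
# token prediction in generative AI.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The model can only reproduce patterns present in its training data, which is also why generative AI output mirrors the flaws of whatever it was trained on.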
Information security and data privacy
- Never submit personal information or any information classified by HUIT as Level 2 or higher into publicly available generative AI tools.
- Be vigilant and follow security best practices as AI-driven scams are becoming more sophisticated.
Compliance and copyright
- You are responsible for the accuracy and compliance of your content.
- Depending on the tools and parameters you use, AI-generated content can be inaccurate, misleading, entirely fabricated, or may contain material protected by copyright.
- Do not submit work to which you don’t have rights into a generative AI tool and always be prepared to disclose your usage of any such tool.
Academic integrity
- The GSD’s academic integrity policy can be found in the Student Handbook; any reference there to unauthorized human or non-human support and aids in producing academic work applies equally to generative AI tools.
- As a rule of thumb, wherever it is inappropriate for you to ask a human contributor to do work for you or where you don’t have explicit permission to share the work of one person with another, it is equally inappropriate for you to prompt an AI tool to do work for you or upload the work of others into an AI tool.
- Instructors determine what constitutes appropriate use of generative AI in their courses just like they have the authority to determine what constitutes appropriate use of established technological aids and reliance on human collaboration.
- Students are expected to be familiar with and abide by the School’s standards for academic integrity and conduct, and consult their instructor if they need clarification.
- Instructors are encouraged to be proactive and to communicate expectations for academic conduct and the use of generative AI tools in their courses (see “Guidance for GSD courses” below).
Generative AI tools available at Harvard
- Harvard University IT provides up-to-date information about available AI tools online.
- If you have questions about the risk of using a specific tool, or are interested in learning whether HUIT may be able to provide a secure environment for experimenting with a specific tool, please contact [email protected].
Harvard Resources on Generative AI
- Harvard’s central information hub around generative AI provides a wealth of information, recommendations, scenarios, and testimonials for different user groups: faculty, students, scholars and researchers, and staff.
- HUIT’s website on generative AI provides up-to-date information about available tools, usage considerations, and related resources.
- The Derek Bok Center’s Resources on Teaching and Artificial Intelligence in Canvas include a wealth of useful information about AI in teaching settings.
- metaLAB (at) Harvard’s Proposed Harvard AI Code of Conduct provides key points and suggestions for the responsible use of AI in alignment with the Harvard College Honor Code.
Non-Harvard Resources on Generative AI
- theresanaiforthat.com is an AI-generated database that indexes and tracks 15,000+ (and counting) publicly available AI tools across the internet.
- “A Generative AI Primer” by Michael Webb explains generative AI technology and its expected impact on higher education.
- UNESCO’s quick start guide on “ChatGPT and Artificial Intelligence in higher education” illustrates the challenges and ethical implications of AI.
Policy for the use of AI in courses
We encourage all instructors to include a policy in course syllabi regarding the use and misuse of generative AI. Whether students in your course are forbidden from using ChatGPT or expected to explore its limits, a clearly articulated policy ensures that students know what appropriate interaction with generative AI tools looks like and what is expected of them. Below is sample language you may adopt for your own policy; feel free to modify it or create your own to suit the needs of your course.
- Restrictive draft policy: We expect that all work students submit for this course will be their own. In instances when collaborative work is assigned, we expect the assignment to list all team members who participated. We strictly prohibit the use of ChatGPT, AI-based image generators, or any generative artificial intelligence (GAI) tool at any stage of the work process, including initial or preliminary stages. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course.
- Fully encouraging draft policy: This course encourages students to explore the use of generative artificial intelligence (GAI) tools such as ChatGPT or an AI-based image generator for all assignments and assessments. Any such use must be appropriately acknowledged and cited. It is each student’s responsibility to assess the validity and applicability of any GAI output that is submitted; you bear the final responsibility. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course.
- Mixed draft policy: Certain assignments in this course will permit or even encourage the use of generative artificial intelligence (GAI) tools such as ChatGPT or an AI-based image generator. The default is that such use is disallowed unless otherwise stated. Any such use must be appropriately acknowledged and cited. It is each student’s responsibility to assess the validity and applicability of any GAI output that is submitted; you bear the final responsibility. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student’s responsibility to conform to expectations for each course.
Common uses for students
- Subject to course policy (including any disclosure requirements and excluded scenarios), students might want to use AI tools in coursework for tasks such as
- Formulating initial ideas and starting points for research and asking high-level non-specialized questions about their goals
- Proofreading or correcting existing text, similar to the functionality provided by tools such as Grammarly (which relies on AI for its core features)
- Gathering references and resources for research, with great caution towards unreliable and fabricated content (sometimes called “hallucinations”)
- Summarizing large datasets that are either publicly available or don’t otherwise violate data privacy policies. An example would be extracting verdicts from hundreds or thousands of publicly available legal cases
- Analyzing existing and non-protected sets of data for correlations or possible patterns
- Generating images with caution towards possible copyright infringement (note that Adobe Firefly is trained on licensed or freely available content and thus carries lower copyright risk)
Common uses for instructors
- Instructors are expected to use great caution in employing AI technologies in teaching and are responsible for ensuring accuracy. However, like traditional internet searches, AI-generated content can provide a useful starting point and inspiration for
- Drafting lesson plans, exercises, or quizzes; note that output will almost always contain problems and requires careful review
- Summarizing, simplifying, or customizing existing material such as lecture notes; when asked to edit text, AI generally won’t introduce new or misleading concepts, but vigilance is key
- Note that while it is inappropriate for instructors to use AI to provide feedback on student work, it may be appropriate to ask students to seek AI-generated feedback on their own work as part of a carefully framed assignment (e.g., AI tools may be getting better at auditing a design project for compliance with codes or other specifications)
Covering the cost of AI tools
- Instructors are encouraged to utilize AI tools in their courses that Harvard or the GSD provides on enterprise agreements as they become available. In partnership with HUIT, the GSD is actively exploring how to make desirable tools available at no additional cost to users, and contract negotiations are ongoing.
- Instructors may also require students to purchase individual licenses for a specific generative AI tool not provided by Harvard or the GSD; they are asked to include this information and the expected cost on their syllabus, treating it like any other expected course expense, such as materials for fabrication.
- Instructors may not use course budgets to pay for or reimburse students for using AI tools in their courses, but they may use available research funds to pay for AI tools for their own research and experimentation. Contact [email protected] to learn whether IT can offer support or access to a desired tool for your course needs.