In the AI workshops I've been running, many instructors have asked for example AI policy statements they can reuse in their syllabi. For most of 2023, I would direct faculty to the fantastic crowdsourced list of AI policies that Lance Eaton organized. As much as I love this resource, it now contains over 100 examples and may be overwhelming for some instructors.
To create a more streamlined list of my favorite policy statements, I made a copy of Lance’s spreadsheet and did the following:
Removed policies that didn't have info in the "rights for reuse" column.
Reviewed all policies that were eligible for reuse and removed any that felt a bit too short or generic to address the key concerns I’m hearing from faculty.
Added a notes column where I documented any unique elements that caught my eye as I reviewed each policy.
Added a column for "permissiveness" so users can quickly find examples that match how restrictive or permissive they’d like their AI policy to be.
Added a "sort order" column and assigned a value of 1 to the policies I thought were the best starting point for beginners. I went back and forth about whether or not to include this column. My goal was to provide busy instructors with a way to sort and view my “top 10” while leaving room for a few additional policy statements that could also be helpful.
You can view my annotated AI syllabus policy list here. If this list still feels a bit overwhelming, I’ve included three of my favorite policy statements at the end of this article.
Tips for an Effective AI Syllabus Policy
Here are the common themes I’ve noticed in the more effective and detailed AI policy statements I’ve seen.
If AI is prohibited or required, explain why.
Connect AI use to something familiar (e.g., getting help from a friend or tutor).
Provide examples of acceptable and/or unacceptable use.
Acknowledge ethical issues such as data privacy, bias, inaccuracy, intellectual property violations, and environmental impact.
Note your AI documentation and citation requirements. This might include screenshots, transcripts, documents with “track changes” enabled, APA AI citation guidelines, or MLA AI citation guidelines.
Explain how misuse will be addressed.
Encourage students to ask questions if your policy is unclear.
Permissive Example: Professional Writing Course Policy by Lance Cummings, PhD (UNC-Wilmington)
Hey everyone! I take a unique approach to writing and content creation compared to some other professors. To me, writing happens in a network between people and technology (not just you sitting in front of a computer typing away). AI is now part of this network, whether we want it to be or not.
My view is that AI will impact us no matter what. But we also have the power to shape how AI develops if we engage with it thoughtfully. This class gives you a safe space to creatively experiment with AI without shame, fear, or guilt.
⭐️ I want to be clear: You will not be penalized just for using AI in this course. Unless I say otherwise for a specific assignment, feel free to try out AI writing assistants and generate content with these tools.
I'll share prompts and activities to guide your AI exploration. These are optional - use what's helpful to you! AI is not always the right or best choice for a given writing activity. We'll also discuss our experiences openly as a class to promote mindful AI integration.
While experimenting freely, keep these points in mind:
AI can demonstrate biases and inaccuracies at times. Always validate the content before accepting it.
Be cautious with data privacy. Don't input anything too personal or private. You can't control where it ends up. If you wouldn’t post it on the internet, don’t give it to an AI.
Recognize the limitations. AI doesn't truly comprehend facts or meaning yet. It makes guesses, which means it can confidently provide false information. AI content may initially seem impressive, but it's usually not as good as you think it is. I call this effect "AI goggles." Take care whenever using AI-generated text.
⭐️ Also keep in mind that my AI-forward policy only applies to this class. Other professors likely have different rules. Using AI without permission could violate academic integrity policies. So always check the specific guidelines for each class first!
Let's explore AI as a creative tool to augment our skills, not replace them. I'm excited to see what we can discover together! Let me know if you ever have any other questions.
What’s unique about this policy?
Sets a friendly, welcoming tone. That part about the class providing a safe space to experiment with AI without shame, fear, or guilt? Chef’s kiss! Personally, I love a syllabus policy that includes exclamation marks and emojis, but if you’re not a fan of exclamation marks in professional communication, that’s ok. You do you! 😉
Addresses AI’s projected growth head-on. The “whether we want it to be or not” wording might feel a bit too fatalistic for some, but I appreciate that the language is clear and direct.
Shows vulnerability. The instructor is learning, too, and shows this by saying, “I’m excited to see what we can discover together.” (Emphasis mine.)
Doesn’t speak for other instructors. The policy reminds students that other instructors may not be as permissive.
You can learn more about Dr. Cummings’ work at iSophist.com.
Moderately Permissive Example: Computer Science Course Policy by David Joyner, PhD (Georgia Tech)
We treat AI-based assistance, such as ChatGPT and GitHub Copilot, the same way we treat collaboration with other people: you are welcome to talk about your ideas and work with other people, both inside and outside the class, as well as with AI-based assistants. However, all work you submit must be your own. You should never include in your assignment anything that was not written directly by you without proper citation (including quotation marks and in-line citation for direct quotes). Including anything you did not write in your assignment without proper citation will be treated as an academic misconduct case.
If you are unsure where the line is between collaborating with AI and copying from AI, we recommend the following heuristics:
Never hit “Copy” within your conversation with an AI assistant. You can copy your own work into your conversation, but do not copy anything from the conversation back into your assignment. Instead, use your interaction with the AI assistant as a learning experience, then let your assignment reflect your improved understanding.
Do not have your assignment and the AI agent itself open on your device at the same time. Similar to above, use your conversation with the AI as a learning experience, then close the interaction down, open your assignment, and let your assignment reflect your revised knowledge.
This heuristic includes avoiding using AI assistants that are directly integrated into your composition environment: just as you should not let a classmate write content or code directly into your submission, so also you should avoid using tools that directly add content to your submission. Deviating from these heuristics does not automatically qualify as academic misconduct; however, following these heuristics essentially guarantees your collaboration will not cross the line into misconduct.
What’s unique about this policy?
Compares AI use to collaborating with a person. This makes it easier for students to imagine what might constitute an academic integrity violation.
Provides practical guidelines to avoid misuse. Explicit examples are always helpful, and you can’t be much more explicit than, “Don’t copy text directly from an AI tool,” and, “Close the AI window/app when you’re actively writing content for your assignment.”
Want to see the policy in context? View the course syllabus for OMS CS7637: Knowledge-Based AI.
Restrictive Example: Biology/Scientific Communication Course Policy by Ann Davis, PhD (Texas Woman’s University)
All assignments in this course are individual assignments. In this class, you will often be discussing course concepts with your classmates and with me, but when you sit down to complete a quiz, write a discussion post, or work on a project, I expect you to do the actual work independently. This is the only way that I will be able to tell what you have learned.
You may not use non-TWU “tutoring services” such as Chegg or Course Hero for this course. Paying someone else to do your classwork is the opposite of learning.
You may not use artificial intelligence tools to complete your assignments in this course.
Your major projects in this course are open-book and open-note. However, plagiarism from any source is prohibited, both by university policy and by federal law. Any written assignments, including quizzes, projects, and discussion posts, must be your own, original work. You cannot directly copy word-for-word from any source, including a textbook, even if you provide a citation. Copying someone else’s words denies credit to the original author, and it also robs you of the opportunity to deepen your understanding by putting things in your own words.
We will be using the Turnitin tool on many assignments in this course as a way to teach you to identify and avoid plagiarism. You will be able to see your similarity report as soon as you submit an assignment. If you notice that you have accidentally committed plagiarism, you should rewrite your assignment and resubmit it. If I notice that you have accidentally plagiarized, I will contact you and ask you to rewrite and resubmit, and I will not grade your assignment until I receive your new submission.
What’s unique about this policy?
Provides a rationale. The instructor helps students understand why AI use is prohibited.
Goes beyond AI and mentions other prohibited services. Some instructors worry that mentioning services like Chegg or Course Hero could make matters worse if students haven’t already heard of these tools. However, by calling out these services by name, the instructor makes it clear they’re aware of the myriad ways students might be tempted to take shortcuts.
Includes transparency about use of AI detection software. The instructor states that students will be able to see their Turnitin similarity report. While numerous institutions have disabled Turnitin’s AI detection feature out of concerns about potential bias, inaccuracy, and the negative impact of false positives, many instructors currently rely on it. Ideally, these instructors should treat a high similarity score as a starting point for a discussion with the student rather than an immediate assumption of guilt. Providing students with access to their reports and encouraging them to discuss concerns about false positives may help mitigate some of the potential harm caused by AI detection tools.
Gives students a second chance. By using the phrase “If you notice that you have accidentally committed plagiarism,” and giving students a chance to resubmit, the instructor reduces some of the anxiety students may feel about the use of an AI detection tool.