Academic Integrity and Generative AI

With recent developments in generative AI capabilities, it is reasonable to be concerned that your students may abuse or misuse these tools to plagiarize, cheat or otherwise violate your standards for academic integrity. However, there are many strategies you can use to establish clear guidelines for generative AI use with your students to ensure that your students use these tools appropriately or understand that they may not use these tools at all. On this page, we describe the university’s policies on generative AI and academic integrity and also provide some strategies for discussing these topics with your students.


What is the university's policy on the use of generative AI?

The university empowers instructors to establish their own policies on the use of generative AI in the classroom, within the guidelines of the Honor Code. It is your responsibility to determine how or if you allow your students to use these tools in your class. If you choose to allow your students to use generative AI tools, it is your responsibility to clearly communicate your expectations for how they engage with these tools. Likewise, even if you choose to completely prohibit the use of generative AI tools, you should state this policy clearly to your students. If you do not provide a statement on the use of generative AI tools, then the university permits students to use generative AI tools, but they must disclose all generative AI usage. For the university’s official policy on the use of generative AI, please visit this page on Academic Affairs Guidance for Artificial Intelligence (AI) Tools.


How can I address generative AI and academic integrity with my students?

Because generative AI policies are up to the individual instructor, it is your responsibility to clearly and thoroughly communicate your expectations to your students, regardless of whether or not you choose to allow the use of generative AI in your course. Including a statement in your syllabus can be a helpful way to clearly state your expectations. When drafting a syllabus statement on the use of generative AI, you may want to consider and address the following questions:

  • Will I allow the use of generative AI in my course?
  • What are my expectations for students who use generative AI? For what purposes can students use these tools? Can they use them for brainstorming? Proofreading? Composing text?
  • How do I define appropriate and ethical usage for generative AI? What are my parameters?
  • What constitutes academic misconduct within my course with respect to the use of generative AI?
  • How will I ensure students are aware of and respect any applicable confidentiality and privacy policies?
  • How will I require students to disclose and/or cite their use of generative AI? Will I create my own guidelines or have them follow APA citation guidelines or MLA citation guidelines?
  • How will I ensure my students understand their responsibility for AI-generated content?
  • Are there assignments where my expectations differ from the guidelines presented at the beginning of the semester? How will I convey this to students?

For more syllabus ideas, see this handout on Sample Syllabi Statements for Generative AI and ChatGPT Usage.

In addition to including a statement on the use of generative AI in your syllabus, you might also consider having a conversation with your students about generative AI tools at the beginning of the semester and throughout the semester, as needed, to ensure clarity. For additional strategies for using generative AI in the classroom, check out our resource on Incorporating Generative AI in the Classroom.


What are some proactive strategies that ensure my students don't abuse generative AI tools?

Given that many generative AI tools convincingly mimic human speech patterns and create original compositions, it can be difficult to detect the use of these tools. Rather than simply relying on detection strategies, you may want to consider proactively addressing the use of generative AI and minimizing opportunities for the misuse of these tools.

As such, a good starting place is to consider how you might develop a culture of academic integrity in your classroom. Students are more likely to exhibit academic dishonesty if they do not understand the purpose of a policy, are able to rationalize dishonest behavior or perceive that acting dishonestly is the norm among their peers.

The following questions may help you to brainstorm strategies for encouraging students to act with integrity when using generative AI tools:

  • How will I help my students understand the value of academic integrity?
    • You might consider sharing why academic integrity is important, involving students in conversations about integrity or inviting students to discuss how they will contribute to a culture of integrity. You may also remind students that Vanderbilt has a unifying Honor Code across all schools, reflecting a shared commitment to completing academic endeavors ethically. Students often respond positively to the notion of academic integrity as one of the University’s oldest traditions, one that establishes and advances the credibility of a Vanderbilt degree.
  • How will I help my students understand my generative AI policies and my rationale for these policies?
    • In addition to ensuring that you have clearly stated policies, you might also give students the opportunity to ask questions about both the policy itself and the reasons for the policy.
  • How will I motivate my students to follow my policies?
    • Actions such as setting clear expectations, providing support and scaffolding to complete activities, appropriately challenging students, explaining the relevance of an activity and providing opportunities for growth can help motivate your students.

Additionally, you may want to connect students to resources that can help them succeed academically.

Lastly, you might consider modifying existing assignments to discourage the use and abuse of generative AI tools. However, you may also find it helpful to design opportunities for students to develop AI literacy by incorporating generative AI into your teaching.

If you adapt your assignments to minimize opportunities for generative AI misuse, remember to keep in mind diverse student needs and accessibility. As you adapt your activities, consider whether your changes could inadvertently create new hurdles and burdens for students with disabilities. Consider approaching any adaptations you make through the lens of universal design.

While your strategies may vary depending on the goals of your course and the decisions you make regarding the use of generative AI, some of the following strategies may help you limit or mitigate the effects of unauthorized generative AI use:

  • Localize assignments

    Requiring students to draw on information discussed in class, specific texts and other materials not readily available online makes it less likely that generative AI tools can produce text that meets the assignment requirements.

  • Incorporate personal reflections

    Asking students to reflect on their personal experiences and opinions can help distinguish AI-generated text, because the tone of artificial intelligence tools, when attempting to replicate human reflection, is frequently bland and lacking in detail.

  • Scaffold assignments

    Breaking an assignment down into several smaller tasks that build up to a final project can minimize the likelihood of students turning to generative AI programs. Incorporating prompts that ask students to reflect on the changes they make to their work during each stage of a project can also help discourage the use of generative AI.

  • Use exploratory and hands-on activities

    Activities that require experimentation, observation or in-person engagement, such as labs, fieldwork or interviews, are difficult to complete with generative AI alone and encourage students to produce original work.

  • Require analysis and critical thinking

    While generative AI tools are skilled at paraphrasing and summarizing information, they are often less adept at activities that require analysis and critical thinking. While these programs may produce coherent outputs to analytical and critical thinking prompts, these outputs often lack the depth and originality of human-generated responses.

  • Provide opportunities for in-class activities and assignments

    For activities or assignments where students may be tempted to use generative AI, consider having students complete these tasks in class.


How do I detect the use of generative AI?

There are currently no foolproof methods for detecting the use of generative AI. However, there are certain red flags that can indicate that a student may have used generative AI when completing an assignment. If you suspect the unauthorized use of generative AI, consider taking one or more of the following steps:

  • Look for inaccuracies. Because generative AI tools frequently generate inaccurate information, a text with multiple errors or flagrant inaccuracies may be a sign that a student has used generative AI.
  • Notice the incorporation of course teachings. If a student’s approach to the assignment diverges significantly from the approach presented in class without citing external sources of information, this may be an indicator of AI-generated writing.
  • Review tone. AI-generated writing is often very formulaic and may lack the emotion and depth of human-generated text. If a text reads as though it were written by a machine, this is a possible red flag.
  • Compare to previous work. A major change in tone and style from previous writings could indicate that a student has used generative AI.
  • Assess citations. Generative AI tools may cite journal articles, book chapters, and other academic sources that are not readily available to students. If a journal article, book chapter, or another source is not available through the Vanderbilt Libraries system, this may be a sign the source was identified by a generative AI writing tool.
  • Check for fake or dead-end links. When asked to generate citations, generative AI tools often create fake URLs. Inclusion of several fake or dead-end links could be a red flag.

While a single inaccuracy or a slight change in style may not necessarily mean that a student has used generative AI, repeated or egregious instances of the aforementioned red flags may indicate that a student has used generative AI tools.

  • Can plagiarism checkers accurately identify generative AI use?

    Based on current technological capabilities, traditional plagiarism checkers are not reliable detectors of generative AI use, and publicly available AI detection tools may compromise student privacy protections. These tools have several flaws, such as:

    • Privacy issues. Vanderbilt currently does not offer any institutionally supported AI detection tools, and many third-party detection tools have unknown privacy and data usage policies. This means that entering student data into these tools may violate student privacy protections such as FERPA.
    • Lack of public-facing, detailed methodologies and standards for detecting AI-generated text. For example, Turnitin, to date, gives no detailed information as to how it determines whether a piece of writing is AI-generated. The most the company has said is that its tool looks for patterns common in AI-generated writing, but it does not explain or define what those patterns are.
    • Bias against non-native English speakers. Early studies suggest that generative AI detectors are more likely to label text written by non-native English speakers as AI-written.

    For more information on the current limitations of AI detection tools, please see our Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector.

  • Can generative AI tools like ChatGPT detect plagiarism?

    Generative AI tools such as ChatGPT cannot reliably detect unauthorized AI use. Using generative AI in this manner is unsafe because:

    1. Generative AI tools like ChatGPT are designed to predict the next most likely word or phrase and are not designed to detect AI-generated writing.
    2. Many generative AI tools have ambiguous privacy policies and may not protect student information according to FERPA standards.

    Additionally, strategies such as comparing a student's work to an AI-generated response to the assignment prompt do not yield reliable results due to the variability of AI-generated outputs. Because these tools produce unique and variable responses, they are highly unlikely to create outputs that match a student’s work verbatim.


What do I do if I suspect unauthorized use of generative AI?

The Faculty Guide to the Honor System provides an overview of how to address suspected violations of the Honor Code. Depending on the strength of the suspicion, options include:

  1. Issuing a warning
  2. Reporting to the respective honor council
    1. Suspected violations of the Honor Code involving undergraduate students can be reported here
    2. Procedures for reporting suspicions of Graduate and Professional Student academic misconduct can be found here

Please note that current guidelines indicate that a report to the Undergraduate Honor Council cannot be based solely on an artificial intelligence detector score. Additionally, reports of unauthorized AI use to the Undergraduate Honor Council must meet the following standards:

  1. The allegation is aligned with Vanderbilt Academic Affairs’ Faculty Policy on AI and the Center for Teaching’s guidance on artificial intelligence detection
  2. The allegation is supported by factors indicating potential unauthorized aid usage, including (but not limited to):
    1. Content inconsistent with assignment instructions
    2. Fake/dead-end links
    3. Inconsistencies between the voice and writing style of prior submissions

What happens after I report suspected unauthorized use of generative AI to the Undergraduate Honor Council?

If you choose to make a report, the Honor Council will ultimately ask: “Is it more likely than not that this student gave and/or received unauthorized aid in some form on the assignment?” The Undergraduate Honor Council does not have to determine whether AI usage specifically occurred, only whether it is more likely than not that unauthorized aid was given or received.

When determining whether an Honor Code violation occurred, the Honor Council will consider factors including (but not limited to):

  • Assignment instructions
  • Syllabus policies
  • Comparisons between the assignment submission in question and the alleged student’s full body of work in the course
  • Instructor analysis of the assignment submission for known indicators of artificial intelligence
  • Student and instructor testimony

The Honor Council generally will not consider the following factors:

  • AI detection scores
  • Other students’ work in the class (unless there is an allegation of collaboration)
  • The alleged student’s work completed for other courses

If the student is found guilty, the panel will assign a penalty based on the following factors:

  • Flagrancy
  • Premeditation
  • Truthfulness
