
Tips for Using Generative AI

Generative AI programs like ChatGPT, DALL-E, Bing Chat, and others are powerful tools that offer instructors and researchers a variety of new opportunities.

To harness the power and potential of Artificial Intelligence, you must also consider how and when the capabilities of these tools best meet your needs. Just like you wouldn’t use a calculator to proofread an essay, generative AI tools can’t fix every problem. But when used strategically, these tools can be powerful. On this page, we provide some strategies for addressing the following questions:

  • What are some of the strengths and limitations of generative AI?
  • When and how should I use generative AI?

Perhaps the most important step in using generative AI is to first learn how to appropriately and effectively use these tools, which includes developing skills for effectively writing prompts—also known as prompt engineering. This page offers some initial guidance for using generative AI tools. We also provide additional resources for developing your competencies, including our page on prompt patterns and a free, self-paced course on Prompt Engineering taught by Vanderbilt University Professor of Computer Science Jules White.


What are the strengths and weaknesses of generative AI?

To understand the strengths and weaknesses of generative AI, it is worth keeping in mind how AI works. Generative AI is essentially very advanced pattern-recognition software. Programs like ChatGPT analyze large sets of data and produce unique outputs that mimic the patterns identified in that data. Because of this, generative AI programs are great at recognizing patterns, but they may be less skilled at producing factually sound output. Additionally, the quality of a program's output depends heavily on the quality of your prompt. When writing prompts, consider how you might capitalize on generative AI's strengths while minimizing its weaknesses.

The lists below, while not exhaustive, highlight some key strengths and weaknesses of generative AI programs.

Strengths of generative AI

  • Proofreading

    Generative AI programs are skilled at pattern recognition, which makes them excellent proofreading tools. This can be especially helpful if you are writing in a second language or are taking final steps to prepare a piece of writing for publication.

  • Brainstorming

    Generative AI programs can generate questions to help you consider certain ideas more deeply. They may also generate arguments for or against a topic if you are trying to view an issue from multiple perspectives.

  • Generating code

    Coding is one example of the varied formats of output you can get from generative AI. Even if you are inexperienced in coding or web design, generative AI tools are skilled at mimicking the patterns and rules of common coding languages. By using a well-crafted prompt and taking the time to verify the quality of the code, you can quickly write or troubleshoot code in languages such as Python, R, and HTML (a short verification sketch follows this list).

  • Generating visuals

    Generative AI can create images for presentations, handouts, or web pages where you need imagery to improve the design. Using tools like DALL-E, Stable Diffusion, Midjourney, or many others, you can give the AI tool a text prompt describing the image you want, and it will generate that image for you.

  • Mimicking and explaining genre

    Because many generative AI programs are skilled at recognizing patterns, they are particularly adept at mimicking certain genres, especially those that are more formulaic. They may also be able to describe certain elements that should be present in a given genre of writing.

  • Mimicking tone

    Given the appropriate prompt, generative AI tools can mimic the tone of a specific author, artist, or body of work. This can be useful when you need to consider questions of audience in a given piece of writing.
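
For example, the verification step mentioned under "Generating code" above can be as simple as running the generated code against a few cases whose answers you already know. The snippet below is a hypothetical illustration in Python: it imagines that a chatbot has produced a temperature-conversion function and checks it against known values before the code is trusted.

    # Hypothetical AI-generated function: convert Fahrenheit to Celsius.
    def fahrenheit_to_celsius(temp_f: float) -> float:
        return (temp_f - 32) * 5 / 9

    # Quick checks against values you already know, run before relying on the code.
    assert fahrenheit_to_celsius(32) == 0        # freezing point of water
    assert fahrenheit_to_celsius(212) == 100     # boiling point of water
    assert round(fahrenheit_to_celsius(98.6), 1) == 37.0   # normal body temperature
    print("All checks passed.")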

Weaknesses of generative AI

  • Factual inaccuracies (i.e. hallucinations)

    Generative AI tools have been known to experience hallucinations, meaning that they provide information that is either entirely or partially fabricated. These tools produce text that reads as grammatically and semantically correct, but they have no actual understanding of the language they are using. It is important to remember that generative AI tools are designed to identify and recreate patterns; they have no concept of what is true or accurate.

  • Inherent bias

    The training data used to develop generative AI tools is often based primarily on English text from Western sources, so it inherently reflects racial and cultural biases. The same issue appears in image generation tools, which have been shown to exhibit racial, cultural, and gender biases.

  • Privacy and liability

    • Some generative AI programs may not meet the minimum requirements to protect sensitive data or information covered by privacy laws such as HIPAA or FERPA. Additionally, many organizations, including the National Institutes of Health, have strict regulations about entering other people's research into generative AI software.

    • Entering your own unpublished research or data into generative AI programs could result in that research and data being used to train generative AI models. Before using generative AI programs, review all relevant policies and regulations and understand how your input will be used.

  • Citation and authorship

    By nature, generative AI programs may summarize or paraphrase a concept or idea without properly attributing the ideas to their sources.

  • Linguistic and cultural limitations

    Generative AI programs are trained on large volumes of data. However, these data are not comprehensive and may underrepresent certain languages and cultures. Moreover, these programs may not be adept at distinguishing between reliable, high-quality content and unreliable content.

  • Unintended consequences of use

    Given the newness of generative AI tools, their use may have consequences that users do not anticipate. For example, some scholars have identified dilemmas related to intellectual property, environmental impacts, and the future of work. To make an informed decision about how you will use generative AI, consider researching its impacts from many angles. Resources such as the Vanderbilt Policy Accelerator can serve as a useful starting point for exploring topics that might be of interest.


When and how should I use generative AI?

To reap the full benefits of generative AI, it is worth considering both why you want to use it and how you will use it.

As a starting point, this flow chart¹ can help you determine if generative AI is an appropriate option for you.

Flow chart describing when it may be safe to use ChatGPT, with the following possible use cases:

  1. If it does not matter whether the output is true, it is safe to use ChatGPT.
  2. If it matters whether the output is true, you have the expertise to verify that the output is accurate, and you are willing to take full responsibility (legal, moral, etc.) for missed inaccuracies, it is possible to use ChatGPT, but be sure to verify each output word and sentence for accuracy and common sense.
  3. If it matters whether the output is true and you have the expertise to verify that the output is accurate, but you are not willing to take full responsibility for missed inaccuracies, it is unsafe to use ChatGPT.
  4. If it matters whether the output is true and you do not have the expertise to verify that the output is accurate, it is unsafe to use ChatGPT.
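
Because the flow chart is a simple decision procedure, its branches can also be written out in a few lines of code. The sketch below is purely illustrative: the function name and arguments are our own, and it simply restates the flow chart above in Python.

    def chatgpt_use_advice(truth_matters: bool,
                           can_verify: bool,
                           will_take_responsibility: bool) -> str:
        """Restate the flow chart's branches as a simple decision function."""
        if not truth_matters:
            return "Safe to use ChatGPT."
        if not can_verify or not will_take_responsibility:
            return "Unsafe to use ChatGPT."
        return ("Possible to use ChatGPT, but verify every word and sentence "
                "for accuracy and common sense.")

    # Example: the output must be true, you can verify it, and you accept responsibility.
    print(chatgpt_use_advice(truth_matters=True,
                             can_verify=True,
                             will_take_responsibility=True))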

As you begin a project or task that incorporates generative AI, consider building a strategy based on the following questions:

  • What is my purpose and goal in using generative AI? Do I want to make a certain process more efficient? Fill a knowledge gap? Generate new ideas?
  • Who is my audience? What are their attitudes towards the use of generative AI? What expectations do they have about the use of generative AI and citation practices? How can I use these tools to meet the needs and/or expectations of my audience?
  • What kind of information will I be sharing with generative AI? Am I okay with this information potentially being shared or circulated? Am I sharing information that is subject to certain privacy restrictions such as FERPA or HIPAA?
  • What specific information or product do I want to get from generative AI? Do I want to generate brainstorming questions? Do I want it to generate a polished version of a piece of writing?
  • How will I prompt AI in a way that generates the output I want? How might generative AI misunderstand my request? What kinds of biases might the AI generate, and how can I adjust my prompt to mitigate these biases? (A brief prompting sketch follows this list.)
  • How will I revise and review the output that AI generates? What kinds of errors or limitations do I want to pay close attention to? How will I distinguish between helpful and unhelpful output?
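
To make these questions concrete, the sketch below shows one way a prompt that states its purpose, audience, format, and constraints might be sent programmatically. It is only a sketch: it assumes the OpenAI Python SDK (openai>=1.0) and an illustrative model name, and the prompt text itself is a hypothetical example rather than a recommended template.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    # A prompt that spells out purpose, audience, format, and constraints,
    # mirroring the planning questions above.
    prompt = (
        "You are helping a university instructor brainstorm. "
        "Purpose: generate discussion questions for an introductory course unit. "
        "Audience: first-year students new to the topic. "
        "Format: a numbered list of five questions. "
        "Constraint: avoid factual claims that would require a citation."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

Whichever tool you use, the same structure applies: state the goal, the audience, the desired format, and any constraints, then review the output against the questions above.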

  1. Flowchart created by AI and Data Policy Lawyer Aleksandr Tiulkanov

  2. Additionally, the following works were consulted in the development of this webpage; we encourage you to review them for further perspectives on these topics.