Generative AI and Research

Generative AI programs can be powerful research tools that may help with tasks such as visualizing data, writing software or performing computation, paraphrasing sources, formatting papers, brainstorming, and more. However, if used incorrectly or inappropriately, they can have major ramifications for your research. Before using generative AI programs in your research, it can be helpful to first review resources on how these tools work and develop a basic understanding of prompt engineering. It is likewise useful to consider your goals for using generative AI and reflect on any possible pitfalls or ethical issues you may encounter, such as questions of academic integrity and plagiarism, authorship, bias, hallucinations, factual inaccuracies, and misinformation.

When using generative AI, a good rule of thumb is to use these tools for tasks where any answer is valuable (such as brainstorming) or where the answer can easily be checked for correctness. As generative AI continues to develop, disciplinary norms around its use will likewise develop and change, so it is important to stay up to date on trends and guidance that may impact your work.

Moving your research forward with generative AI

Jesse Spencer-Smith, Chief Data Scientist, Data Science Institute, discusses generative AI and research at Vanderbilt.

When and how can I use generative AI in my research?

The Academic Affairs Guidance for Artificial Intelligence Tools establishes some basic guidelines for AI use, including the following standards:

  • Follow guidelines provided by deans or any other individuals who oversee your work.
  • Disclose AI use appropriately when disclosure is expected.
  • Comply with all laws and university policies (including those related to privacy and confidentiality).
  • Take responsibility for the accuracy and impact of AI use.

Given the rapid developments in generative AI capabilities, along with differences in disciplinary values and priorities, you must decide whether and how you will use generative AI in your research. Before proceeding, however, it is important to thoughtfully consider your situational context and any applicable policies or regulations that may impact your work. Attitudes towards generative AI use vary substantially by field. In some fields, for example, generative AI is likely to be integrated into many software platforms and to become a major part of future interfaces; in disciplines where generative AI use is the norm, there may be fewer expectations that you cite and disclose your use of generative AI. In other fields, however, you may need to cite and disclose all instances of generative AI use, even if it is used in the brainstorming stage or as part of your research methodology. When writing and publishing your research, you will want to be especially careful, as some journals have instituted a blanket ban on any text composed by generative AI, while others permit the inclusion of text authored by generative AI with appropriate citation or disclosure.

Before using generative AI in your research or writing, carefully investigate any policies or norms that influence how you may use this technology. When investigating, try to answer the following questions:

  1. What (if any) standards or norms exist in my field relating to the use of generative AI for writing, research or publication?

    The Association for Computing Machinery, the Associated Press, the Committee on Publication Ethics, the International Committee of Medical Journal Editors, the National Institutes of Health, the National Science Foundation, and UNESCO have all published guidance on the use of generative AI in writing, research, peer review, and/or grant proposals. Before using generative AI, investigate whether there are any published guidelines or requirements that could apply to your research.
  2. Does the journal I plan to publish with have standards related to the use of generative AI?

    For example, Nature and Science have recently published guidance about when and how generative AI may be used. When researching journal policies, consider investigating the policies of all journals you may potentially publish with. If the journal does not have published policies and standards, you might want to reach out to the journal’s editor to inquire about their policies.

  3. Who is my audience, and how might they respond to my use of generative AI?

    If you know a journal editor or reviewer is likely to be resistant to the use of generative AI, you may choose not to use it at all. Or you may consider reaching out to the journal for guidance before you proceed. Alternatively, you might include a clear, thorough, and convincing explanation and justification of your generative AI use in your submission. Regardless of how you proceed, understanding your audience will help you make a more thoughtful and informed decision.

What are general best practices for using generative AI in my research?

As generative AI use becomes more common, best practices will continue to change and develop. As you navigate integrating generative AI into your research, ask yourself the following questions:

  1. How familiar am I with techniques for using generative AI tools effectively, such as prompt engineering? How will I develop skills and competencies in this arena?
    1. For example, you might check out our Tips for Using Generative AI page, our Prompt Patterns page, and Vanderbilt University Professor of Computer Science Jules White’s free, self-paced course on prompt engineering.
  2. Where will these tools allow me to work more productively or effectively? When am I better off avoiding the use of generative AI?
    1. For example, using generative AI like an internet search engine, with the expectation that it will return a set of facts, is unlikely to yield reliable results and may be an inefficient use of these tools.
  3. What ethical concerns are on my radar and how will I mitigate any potential issues?
    1. For example, since these tools were trained on human data, they may replicate human biases in the media and content they produce.

The following suggestions may help you make informed decisions about how you will use generative AI. Keep in mind that journals, grant funding agencies, and other stakeholders in your work may have policies related to generative AI use that can influence both your writing and, in some cases, your research practices. These suggestions are based on emerging trends; they do not supersede any applicable university or disciplinary guidelines, and you should always consult and adhere to the specific policies that apply to you. As you begin a project, you might consider the following best practices:

  • Be prepared to disclose, document and/or cite any use of generative AI, especially in relation to your writing.
    • Certain publication venues and funding agencies may require you to document or disclose your use of generative AI. This may be specific to AI-generated text or could apply more broadly to any use of generative AI in the research process.
    • If you plan to publish research that involves generative AI use, review relevant policies and norms to see what (if any) disclosure expectations exist.
    • The Modern Language Association and the American Psychological Association, for example, currently offer guidelines for citing your use of generative AI.
    • If you are using generative AI to work on a long-term research project, consider keeping detailed documentation as disclosure norms and practices will likely change over time.
  • Be fully aware of the risks and potential pitfalls of using generative AI.
    • For example, media produced by these tools may demonstrate substantial bias.
    • Because these tools are not infallible sources of truth, they are rarely an effective replacement for an internet or database search, due to the risk of fabricated data and information (i.e., hallucinations).
    • Before using generative AI, educate yourself on the risks and pitfalls that can emerge from these tools and use this understanding to guide your interactions.
  • Use specific prompts for targeted tasks.
    • Your input to these tools, especially text-based tools, can heavily influence what generative AI tools produce and the quality of their responses.
    • Specific and detailed prompts will help you have a more efficient and positive experience using generative AI.
    • Having clear prompts and goals will help limit (but not eradicate) the likelihood that these programs will generate unhelpful, inaccurate, biased or irrelevant outputs. A minimal sketch of a targeted prompt appears after this list.

    Review Prompt Patterns

  • Be prepared to take full ownership of ideas that emerge from conversations with generative AI.
    • There are many debates currently circulating as to whether generative AI can be considered an author, but many journals and organizations have argued that tools like ChatGPT do not meet the minimum requirements for authorship.
    • As the person making judgements about how you use generative AI, you are responsible for vetting any information or text these programs generate.
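
To make "use specific prompts for targeted tasks" concrete, here is a minimal sketch of building a detailed prompt in Python. The OpenAI Python client, the model name, and the research topic and constraints are all illustrative assumptions rather than recommendations; the same role/task/constraints/format structure can be typed directly into any chat interface.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# A vague prompt ("help me with my research") gives the model little to work with.
# A targeted prompt spells out the role, task, constraints, and output format.
prompt = (
    "You are assisting with a literature review on urban heat islands.\n"  # hypothetical topic
    "Task: draft three candidate research questions.\n"
    "Constraints: each question must be answerable with publicly available "
    "temperature and land-use data and must name the variables involved.\n"
    "Format: a numbered list, one sentence per question."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; substitute one you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```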

What are risks associated with using generative AI in my research, and how can I mitigate those risks?

Given the vast capabilities of generative AI tools, there are a variety of potential pitfalls. Below, we share some guidance on how to use generative AI in a way that mitigates risks germane to your research. When using generative AI tools, it is important to keep in mind that these tools are designed to identify patterns and use these patterns to create probable outputs. For additional guidance on how and when to use generative AI, visit our Tips for Using Generative AI page.

  • Keep in mind that generative AI is probabilistic by design and may produce different outputs each time you give it a prompt, even if you use the same prompt multiple times.

    This can be both a weakness and a strength of generative AI use. When using generative AI, keep in mind that outputs may vary. Try to use these tools when this variability is unlikely to cause problems or when this randomness is useful.

  • Carefully fact check all outputs, especially when asking generative AI to write text that is longer than your original prompt.

    Generative AI programs can convincingly fabricate data and information (a phenomenon referred to as hallucination). These programs may also be trained on data that does not include the latest current events, and they can fail to appropriately distinguish between reliable and unreliable information, leading them to paraphrase a particular concept inaccurately or insufficiently. You will want to ensure that you have the expertise to identify any inaccurate information that these programs may generate.

  • Avoid over-relying on data that generative AI was trained on and include all relevant information in your prompts.

    For example, if you want generative AI to summarize, filter, cite or paraphrase information, provide it with the information you want it to use. Supplying the full text of a document that you want summarized will reduce the chance of hallucinations compared to simply asking it to “summarize XYZ paper.” Similarly, you might ask generative AI to provide quotations from the text you supplied to support each statement in its output; you can then use those quotations to check the consistency of its work. Finally, asking generative AI to perform a task multiple times can help you identify variations in the output that are likely due to hallucinations. A sketch of this workflow appears after this list.

  • Understand the risks of accidental plagiarism.

    Because of the nature of many generative AI programs, these programs may summarize or paraphrase a concept or idea without properly attributing the ideas to their sources. Be aware of this risk and exercise caution when using any ideas or text generated by these programs.

  • Be wary of using generative AI for identifying relevant scholarship or creating citations.

    Using generative AI as a replacement for an internet or database search can lead to instances of generative AI programs fabricating citations and scholarly sources. Rather than asking for a list of references, you might consider using generative AI to identify topics or relationships between ideas. Keep in mind, however, that you will need to fact check all information. It is also worth remembering that even when these programs generate accurate citations, they may still favor older and more frequently cited books and articles, overlooking many relevant sources.

  • Avoid inputting privileged or sensitive information into generative AI programs unless doing so has been approved.

    If your research involves data protected by privacy laws such as HIPAA or FERPA, or otherwise sensitive information, you may want to avoid inputting this data directly into generative AI programs. Many generative AI programs may not meet the requirements for securely storing privileged information; using generative AI could therefore potentially compromise sensitive information. Even if you "de-identify" data, you will want to be thorough and ensure that it cannot be re-identified later.

  • Understand the risks of inputting your unpublished research or data into generative AI programs.

    If you plan to input unpublished research or proprietary data, you may want to research the data storage, privacy, and training policies for the generative AI program of your choice. Depending on the generative AI tool you use and the settings you choose, inputting unpublished research or data could potentially result in your research and data being used to train these programs. There is a chance that ideas from your research could later appear in content generated by these programs without appropriate attribution.

  • Avoid using generative AI to peer review another colleague's work unless you have been given explicit permission to do so.

    Many journals and grant funding organizations, including the National Institutes of Health, currently prohibit reviewers from using generative AI programs to assist in the peer review process. Where no explicit policies exist, journals or the authors under review might consider the use of generative AI to be a breach of confidentiality. As policies may change over time, be sure to review the policies of the relevant journal or organization, such as the National Science Foundation or the National Institutes of Health, before using generative AI.

  • Beware of biased outputs, especially in relation to peer review and content generation.

    In cases where you are permitted to use generative AI to perform peer review, be aware that AI-generated feedback may perpetuate certain biases through both its quantitative and qualitative feedback. When using generative AI to brainstorm ideas for your research or otherwise provide feedback, you will want to craft a prompt that encourages it to include and consider multiple perspectives and minimize bias. Even with careful prompting, these programs may lack the training and cultural competency to approach a problem from every angle, so you will want to carefully review all AI-generated content.
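
As a concrete illustration of the grounding workflow described above (supplying the full text, asking for supporting quotations, and repeating the task to spot inconsistencies), here is a minimal sketch. It assumes the OpenAI Python client; the model name and file name are hypothetical, and the quotation check is a rough heuristic, not a substitute for reading the source yourself.

```python
import re
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical source file; paste in the full passage you want summarized.
with open("paper_excerpt.txt") as f:
    source_text = f.read()

prompt = (
    "Summarize the passage below in three sentences. After each sentence, add a "
    "direct quotation from the passage, in double quotes, that supports it.\n\n"
    f"PASSAGE:\n{source_text}"
)

# Run the same prompt several times. Summaries that vary wildly between runs, or
# "quotations" that do not appear verbatim in the source, are signals to fact-check.
for run in range(3):
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    summary = reply.choices[0].message.content
    quotes = re.findall(r'"([^"]+)"', summary)
    verbatim = [q for q in quotes if q in source_text]
    print(f"Run {run + 1}: {len(verbatim)} of {len(quotes)} quotations found verbatim in source")
    print(summary, "\n")
```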

How can generative AI help my research?

When used correctly, generative AI tools can help improve your productivity and enhance the quality of your research. Below, we share some examples of how you might use generative AI in your research. Keep in mind that you should consult all policies and guidelines specific to your field of research before using any of these strategies.

The strategies below represent just a few examples of how you might use generative AI in your research. When brainstorming strategies for how generative AI may aid your research, we recommend asking yourself, “how do the strengths of generative AI intersect with my research needs?”

  • Computing

    Generative AI is particularly skilled at serving as an interface to computing, helping you perform computational tasks such as visualization, data analysis, and natural language processing. It can help with many computing tasks, including:

    • Generating visualizations, PowerPoint presentations, websites, and PDFs
    • Extracting structured data from unstructured data
    • Automating tasks, such as reorganizing files, applying data transformations, and moving data between systems
    • Processing data, cleaning data, and converting it between formats
    • Generating code and executing it on your behalf to solve problems
    • Converting code from one programming language to another
    • Troubleshooting errors and other coding issues

    Additional tips:

    • For coding, copy and paste error messages back into the tool so it can fix mistakes, whether you introduced them or it did (a sketch of this workflow appears at the end of this section).
    • For software, work with programs that are small enough to be copied and pasted into a single prompt.
    • Leverage tools like ChatGPT Advanced Data Analysis that can write and execute code on your behalf, without requiring you to copy, paste, or run code yourself.
  • Brainstorming

    With careful prompting, generative AI can be a useful brainstorming partner. Its potential uses include:

    • Brainstorming potential solutions to problems
    • Suggesting new ideas for exploration
    • Identifying additional topics for consideration
    • Exploring other angles of a topic or problem that you might not have considered

    Additional tips:

    • The more details that you provide, the better the output will be.
    • Provide rules for brainstorming to guide the outputs that the tool generates.
    • Pay close attention to your input and be sure to incorporate interesting and thoughtful ideas and questions.
    • Keep in mind generic questions or ideas are more likely to lead to biased and uninteresting outputs.
  • Sentence Level Feedback and Translation

    If you are polishing a publication for submission, generative AI programs can provide sentence-level feedback on your writing. This may be especially helpful if English is not your first language. In addition to identifying grammatical errors, you might also request feedback on the following:

    • Style
    • Tone
    • Sentence structure
    • Clarity
    • Conciseness

    Additional tips:

    • Generative AI programs can provide feedback on languages other than English, but depending on the language, the feedback may not be as accurate.
    • For additional clarity and precision, request that the program put any changes in bold and provide rationale for those changes.
    • Verify the accuracy of any suggested changes before revising your work.
  • Summarizing, paraphrasing, and understanding information

    With careful prompting, generative AI programs may be able to assist you by paraphrasing or summarizing information. This can be especially helpful in early stages of the research process as you attempt to identify topics that may be useful or relevant to your research. This may also be helpful for understanding complex or jargon-laden texts.

    Additional tips:

    • To limit inaccurate or fabricated information, include the passages that you want paraphrased or summarized in your prompt.
    • Ask the generative AI tool to include quotations from the provided material to support each statement it makes and check those quotations and statements for consistency.
    • Use generative AI as a starting point for reviewing information and be sure to return to the original source if you plan to work closely with it.
    • For more complex topics, request guided reading questions or an outline of main points that you can annotate.
  • Tailoring research to different audiences

    Generative AI can help you identify patterns in your writing and adjust it to meet the needs of different audiences. Generative AI may be helpful for tasks such as:

    • Identifying keywords
    • Identifying key takeaways for abstracts
    • Meeting strict word limits
    • Simplifying concepts for lay abstracts
    • Tailoring language to a given demographic (such as children) in human subject research
    • Adhering to strict genre conventions

    Additional tips:

    • Consider prompting generative AI to ask you questions and provide suggestions, especially when seeking to ensure clear communication with audiences unfamiliar with your research.
    • Generative AI may be especially helpful when writing in genres that have easily recognizable conventions, given its propensity for recognizing patterns.
    • When receiving feedback related to genre conventions, consider providing models or examples specific to your discipline.
  • Critiquing research

    Generative AI can also be a useful thought partner, due to its ability to generate large volumes of questions in a short period of time. Generative AI may help to provide new perspectives on your work if you’re feeling stuck. It may also be helpful if you want to polish your writing before requesting feedback from a colleague, editor or reviewer.

    Additional tips:

    • If you have specific concerns, integrate those concerns into your prompt.
    • To identify opportunities to strengthen your argument and improve clarity, ask generative AI to identify ambiguities in how you are presenting your work or to identify and challenge your assumptions.
    • Consider asking generative AI to play the role of devil’s advocate to identify places where you may be able to add additional evidence.
    • If you request general feedback and find some responses more useful than others, try using your prompts to indicate the type of feedback that has been most helpful.
  • Formatting

    Because generative AI is particularly skilled at pattern recognition, it can be useful in situations where you must conform to strict constraints such as those found in:

    • Citation manuals
    • Formatting guidelines
    • Style guides

    Additional tips:

    • Provide clear examples or rules before requesting help with formatting.
    • For more complex style guides, consider asking generative AI to create a checklist or prompt you with questions to meet relevant requirements.
    • Consider asking generative AI to help you identify formatting errors or inconsistencies that you may have missed.
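
Returning to the Computing tips above, the sketch below shows one way to capture an error and paste it, together with the offending code, back into a generative AI tool for troubleshooting. The OpenAI Python client, the model name, the draft script, and the data file are all hypothetical; adapt the pattern to whatever tool you use.

```python
import traceback
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# A small draft script (perhaps itself AI-generated) that may fail when run.
draft_code = """
import csv
with open("measurements.csv") as f:          # hypothetical data file
    values = [float(row[1]) for row in csv.reader(f)]
print(sum(values) / len(values))
"""

try:
    exec(draft_code)  # run the draft
except Exception:
    error_message = traceback.format_exc()
    # Per the tip above, paste both the code and the full error message back into
    # the tool so it has the context it needs to explain and fix the problem.
    follow_up = (
        "This Python script raised an error. Explain the cause and suggest a fix.\n\n"
        f"CODE:\n{draft_code}\nERROR:\n{error_message}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": follow_up}],
    )
    print(reply.choices[0].message.content)
```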