What is ChatGPT? Figure 1 is the response I received from ChatGPT when I asked it that question. Actually, this was the third response generated by ChatGPT. I kept asking it to “regenerate response,” and this is the one I thought would be best understood by a broad audience.
My answer to “What is ChatGPT?” is that ChatGPT is a trending artificial intelligence (AI) tool released on November 30, 2022,1 by OpenAI, an AI research and deployment company based in the United States. The tool uses a large language model that learns statistical patterns from vast amounts of text data and uses those patterns to generate text-based outputs, such as answers to questions, abstracts, or essays, by mimicking the patterns it has learned.
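To make the mechanics concrete, the following is a minimal sketch, not part of this column’s methods, of how one might ask ChatGPT a question programmatically. It uses the openai Python library’s 0.x chat interface, which was current when this column was written; the API key placeholder and the prompt are assumptions for illustration.

    # A minimal sketch (assumptions noted above): query the model behind
    # ChatGPT once and print the generated text.
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumed placeholder credential

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # chat model available when this was written
        messages=[{"role": "user", "content": "What is ChatGPT?"}],
    )

    # The generated answer; repeating the call may yield a different one.
    print(response["choices"][0]["message"]["content"])

Issuing the same request again is, in effect, what the “regenerate response” button does, and it may return a different answer each time.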
ChatGPT is just one of an increasing number of AI tools seen in nursing practice, academia, and research, and its use is rapidly increasing throughout the health care industry and beyond. Last year, this column examined the different types of AI and discussed an issue with an AI-enabled tool intended to predict sepsis in Epic electronic health records (EHRs).2 ChatGPT is a type of AI known as generative AI, which uses algorithms to create new content. In the case of ChatGPT, that new content is in the form of text.
ChatGPT can also generate incorrect information, biased content, and harmful instructions. When using new AI tools such as ChatGPT, it is therefore wise to remember the Russian proverb, cited by President Reagan during negotiations with the former Soviet Union about nuclear disarmament: “Trust, but verify.”
The purpose of this column of Technology Today is to inform readers about ChatGPT by reviewing literature that illustrates how the tool is being studied and used in health care, including nursing, and by using the tool and presenting its output. The topic is important because evidence is mounting not only for what ChatGPT can do for the profession but also for its limitations and potential risks to nursing practice, patients, publications, and research.
Current Literature on ChatGPT
I examined the literature about ChatGPT as it pertains to nursing, focusing on authorship, clinical practice, and research. As you read about this AI-enabled tool, keep in mind that AI involves machines performing tasks that normally require human intelligence; in this case, that of nurses.
Authorship
ChatGPT is being used not only in student papers but also in the scientific literature. The most important thing for students, authors, faculty, reviewers, editors, and publishers to remember about using ChatGPT is that it cannot reliably distinguish fact from fiction. As a result, it can produce inaccurate information that requires expert knowledge to identify.
The challenge for writers who use ChatGPT as a source is that publishers and professors require authors to be responsible for the content and integrity of their papers. ChatGPT cannot assume that responsibility, so authors must be knowledgeable enough about the topic to verify the content ChatGPT generates. Nonetheless, ChatGPT is occasionally identified as an author in publications, although some editors, when contacted, have called that an error.3
An editorial in a recent nursing publication listed ChatGPT as a coauthor. ChatGPT authorship was not condoned by the journal’s publisher but was missed because editorials were handled by the journal’s editor-in-chief and not through the usual manuscript submission process.4 The publisher issued a corrigendum but not before the erroneous authorship was noted by other journals and seen by many readers. Two articles published in the same journal provided critiques of that ChatGPT-coauthored editorial.5,6
As a machine and not a human, ChatGPT cannot agree to be an author or coauthor. As such, ChatGPT should not be credited as an author or cited in a reference list; instead, content derived from the tool should be identified as such within the manuscript or paper. I have attempted to do that in this article.
Guidance
AI-enabled tools are here to stay. To discourage their undesired use, universities should address AI tools in their academic integrity policies, specifying what is and is not acceptable use, protocols for handling suspected misuse, and the consequences of violating the policy. Current detection software is too often unreliable. Signs that a student used ChatGPT to generate a course paper may include content that is redundant or illogical, as well as content that differs from the student’s previous work. Suspicion may also be raised by a paper suddenly written at a higher level, incongruent with how the student typically communicates verbally or in writing; unfortunately, this too is an unreliable indicator of ChatGPT use.
When it comes to publications, who is accountable when content generated by ChatGPT is knowingly or unknowingly published? Is it the author, the reviewers, the editor, or the publisher? Starting with guidelines, and then sharing emerging authorship issues and solutions via publications, meetings, and conferences, can help keep those guidelines current and may prove useful in these uncharted waters.
JAMA and JAMA Network offer guidance on content created by AI in their updated instructions for authors.7 The journals discourage submission and publication of content created by AI, language models, machine learning, or similar technologies, stating that if these models or tools are used to create content or to assist in writing or manuscript preparation, this use should be reported in an acknowledgment. JAMA and JAMA Network do, however, support the use of these technologies in formal research design and methods, as long as the manuscript clearly describes the generated content and names the model or tool, its version and extension numbers, and its manufacturer. In short, JAMA and JAMA Network state that the responsibility for the integrity of content generated using these models and tools resides with the authors.
The International Committee of Medical Journal Editors (ICMJE) updated their recommendations in May 2023 to include work conducted with the use of AI-assisted technology.8 The ICMJE recommends that journals require authors to disclose the use of AI-assisted technologies upon submission, describing in both the cover letter and manuscript how it was used. They further state that chatbots, such as ChatGPT, should not be listed as authors because the technology is unable to be responsible for the accuracy, integrity, and originality required for authorship. The ICMJE further delineates the role of authors as responsible for any submitted material that used AI-assisted technologies. Authors must carefully review and edit their AI-assisted submissions, because AI can generate authoritative-sounding content that is incorrect, incomplete, or biased. Lastly, authors should be able to attest there is no plagiarism, including text and images produced by AI-assisted technology.
I asked ChatGPT what advice it would give faculty about using ChatGPT (Figure 2), which generated the response that its “ultimate goal was to help students develop critical thinking, research skills, and ethical awareness” when using AI tools.
Clinical Use
Although ChatGPT can generate humanlike conversations, such as responses to patient questions and drafts of clinician notes, health care demands a very high standard of patient safety as well as accuracy and currency of information. ChatGPT can be helpful in that it offers suggestions for specific search terms and scientific databases. It does, however, have important downsides that should be noted.
Researchers asked ChatGPT for information about the psychological impact of COVID-19 in adults with congenital heart disease. In an important limitation not disclosed by ChatGPT, the results were based on older data because ChatGPT was trained only on data through September 2021.9 The tool is further problematic in that it does not provide references to scientific publications for the answers it generates. It can generate references on a topic when asked to do so; however, those references may not exist. Asking it to regenerate references on the same topic will produce a new list that may include some of the previous references as well as new ones. Note also that when you ask ChatGPT to generate references on a topic and then ask it to regenerate responses within the same chat, some responses may contain a list of references and others may contain information but no references.
I asked ChatGPT: “Does ChatGPT provide references to scientific literature?” The generated response included statements such as, “As an AI language model, I don’t have direct access to my training data or know where it came from”; “My purpose is to assist and provide information to the best of my abilities, regardless of the sources of my training data”; “Therefore, it’s always a good idea to consult peer-reviewed scientific literature or trusted sources for specific scientific inquiries or critical research needs.” These generated responses by ChatGPT are important to keep in mind.
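In the spirit of “trust, but verify,” a reader comfortable with scripting could check whether ChatGPT-generated citations actually exist by searching PubMed through the public NCBI E-utilities service. The sketch below is illustrative only; the citation title is a hypothetical placeholder, and matching by title is a rough check, not a substitute for manual verification.

    # A sketch: count PubMed records whose title matches a citation title,
    # using the public NCBI E-utilities esearch endpoint.
    import json
    import urllib.parse
    import urllib.request

    def pubmed_title_hits(title: str) -> int:
        """Return how many PubMed records match the given title."""
        url = (
            "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
            "?db=pubmed&retmode=json&term="
            + urllib.parse.quote(f"{title}[Title]")
        )
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        return int(data["esearchresult"]["count"])

    # Hypothetical placeholder title for illustration.
    title = "A ChatGPT-suggested article title"
    if pubmed_title_hits(title) == 0:
        print("Not found in PubMed; verify manually:", title)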
Researchers also tested the use of ChatGPT for improving clinical decision support (CDS) in EHRs.10 According to the researchers, the goal was not to show that ChatGPT was superior to humans in generating CDS suggestions but to determine if AI-generated suggestions enhanced human-generated CDS. Five CDS experts examined 36 AI-generated alert suggestions compared with 29 human-generated alert suggestions for 7 different alerts. ChatGPT generated 9 of the 20 highest-scoring suggestions. AI-generated suggestions had lower scores for usefulness (AI, 2.7 ± 1.4; human, 3.5 ± 1.3; P < .001) and acceptance (AI, 1.8 ± 1; human, 2.8 ± 1.3; P < .001) than human-generated suggestions.
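For readers who want to see how such summary statistics are compared, the following sketch applies a two-sample t test to the reported usefulness means and SDs. It assumes one averaged score per suggestion (36 AI, 29 human), which may not match the study’s actual analysis, so it will not reproduce the published P value exactly.

    # A sketch: Welch's two-sample t test from the reported summary
    # statistics (mean, SD, n) for usefulness scores.
    from scipy.stats import ttest_ind_from_stats

    t, p = ttest_ind_from_stats(
        mean1=2.7, std1=1.4, nobs1=36,  # AI-generated suggestions (assumed n)
        mean2=3.5, std2=1.3, nobs2=29,  # human-generated suggestions (assumed n)
        equal_var=False,                # Welch's test: no equal-variance assumption
    )
    print(f"t = {t:.2f}, P = {p:.4f}")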
It is important for nurses to understand what ChatGPT is and to know that it does not replace their knowledge. They may encounter patients who have derived inaccurate or even harmful information from using ChatGPT. Nurses must be prepared to support and provide sound advice to patients who use fallible AI-enabled tools.
I asked ChatGPT if there have been any errors with nurses using ChatGPT in clinical practice (Figure 3). The generated response acknowledged that its output should be interpreted in light of the fact that ChatGPT’s training data predate September 2021. It is unknown whether that caveat applies to all relevant generated responses. This highlights the importance of knowing the age of the data used by any AI tool.
Research
Researchers asked ChatGPT how it can help cardiovascular nurse researchers and received answers that included text summarization, question answering, data collection, language translation, and writing assistance.11 ChatGPT suggested it could summarize large amounts of text data found in research articles and patient medical records. It could answer questions on best practices, guidelines, and protocols in cardiovascular nursing. ChatGPT also proposed it could generate structured data from unstructured text in EHRs, social media, and other sources.
Some of the proposed ChatGPT research assistance already exists in other forms. For example, several EHRs already have built-in query tools or add-ons, and these EHRs also have features that can export data to databases for analysis. Similarly, language translation tools are commonplace and easy to use. Most important, as a language model, ChatGPT cannot analyze numerical data, a limitation researchers should weigh for quantitative research efforts. ChatGPT has other serious downsides for researchers. The information generated can be outdated, which may not be immediately apparent or disclosed by ChatGPT. And the fact that ChatGPT admits to generating inaccurate information cannot be overlooked.
I asked ChatGPT for good topics for nursing research on ChatGPT (Figure 4). Although its suggested topics and descriptions are interesting, we must keep in mind that the data generating them were prior to September 2021. Have these research topics already been sufficiently addressed or are no longer relevant? Are there more important current topics not mentioned that should be considered?
Limitations of ChatGPT
The company website for ChatGPT (https://openai.com/blog/chatgpt) outlines the following limitations that should be considered when using this AI tool:
ChatGPT doesn’t recognize sources of truth and can produce plausible-sounding text that is incorrect.
ChatGPT is sensitive to how a question is phrased: given one phrasing, it may claim not to know the answer, yet a slight rephrase may yield the correct answer (a sketch illustrating this follows the list).
ChatGPT is often excessively verbose and overuses certain phrases.
ChatGPT will guess what it thinks the user is asking instead of asking the user for clarification.
ChatGPT can provide harmful instructions or biased information.
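The phrasing sensitivity noted in the list above is easy to observe directly. The sketch below sends two wordings of the same question through the same openai 0.x interface used earlier; the question about “drug X” is a hypothetical placeholder.

    # A sketch: ask the same question two ways and compare the answers.
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumed placeholder credential

    def ask(question: str) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        )
        return response["choices"][0]["message"]["content"]

    # Hypothetical rephrasings of one question.
    for phrasing in (
        "What are the side effects of drug X?",
        "Which adverse effects are associated with drug X?",
    ):
        print(phrasing, "->", ask(phrasing))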
Implications of ChatGPT for Nursing
Lastly, I asked ChatGPT about the most important things nurses should know when using ChatGPT (Figure 5). Note that its answer to this question was generated from data that predate the public availability of ChatGPT, another important reminder to consider the limitations of information generated by ChatGPT.
Conclusion
ChatGPT was adopted at record speed. For comparison, after its introduction in 2004, Gmail took 5 years to reach 10 million verified users; ChatGPT, released on November 30, 2022, reached 10 million verified users in 2 months. I started writing this column of Technology Today within a few months of the launch of ChatGPT. A mere 3.5 months after that launch, on March 14, 2023, OpenAI announced GPT-4 (https://openai.com/gpt-4), the successor to ChatGPT. GPT-4 can reportedly solve difficult problems with greater accuracy and can accept images as well as text as input. Unlike ChatGPT, GPT-4 is not free to use. Innovation in AI-enabled tools is advancing rapidly, offering the nursing profession both unique opportunities and pitfalls as it discovers new and better ways to practice, publish, and conduct research on this technological frontier. We must trust while verifying.
REFERENCES
Footnotes
The author declares no conflicts of interest.