Artificial intelligence (AI) uses sophisticated algorithms to process large amounts of data, learn from the data, and make predictions or solve problems. Today, critical care nurses in some hospitals use AI to enhance patient monitoring, develop nursing care plans, handle administrative tasks, and improve clinical decision-making.1 In the future, AI will likely provide real-time insights for early detection and diagnosis, facilitate efficient allocation of health care resources, and streamline and automate routine nursing tasks, including documentation.2,3 Although AI can enhance the critical care nursing workflow and the quality of care, it is a tool and, like all tools, has the potential for both benefit and harm. As experts in the clinical context, critical care nurses are uniquely positioned to use their knowledge to maximize the benefits of health care AI while recognizing and mitigating harm.

Including nurses in the development of AI in health care is essential because nurses possess an intimate understanding of the contextual nature of health care data. Understanding how and why the data were generated can prevent inaccurate conclusions. For example, a group of researchers developed an AI algorithm to predict risk trajectories in patients with sepsis.4 The researchers concluded that the algorithm’s performance would be even better with earlier and more frequent measurements of laboratory values such as lactate.4 As clinical application and practice experts, nurse consultants would recognize that national societies recommend limiting routine laboratory studies to situations in which there is a specific clinical question (eg, drawing blood for a lactate level only when there is clinical suspicion of sepsis or impaired perfusion).5

Nurses’ intimate knowledge of health care data is also essential for detecting unfair systemic errors, also called bias, in an AI algorithm.2 Nurses create and record health care data and therefore understand that real-world data do not equitably reflect all types of patients because of systemic problems such as unequal access to care. For example, people who lack insurance or live in rural areas may not have access to health care and therefore may be underrepresented in health care data, especially data generated from routine or nonemergency appointments. Consequently, there is a risk for harm when AI algorithms are created using real-world health care data and then applied to make decisions about patients who are underrepresented in those data. When nurses are included in the AI development process, they can mitigate this risk by identifying the types of patients underrepresented in the training data so that appropriate measures can be taken to address and minimize potential biases in the algorithm, ensuring equitable and inclusive health care for all patients.
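To make this point concrete, the following is a minimal sketch, in Python, of how a development team might surface underrepresented patient groups before an algorithm is trained. The column name, subgroup labels, and reference proportions are hypothetical illustrations, not data from any cited study.

```python
# Minimal sketch: compare subgroup representation in AI training data
# against reference population proportions. Column names and reference
# values below are hypothetical placeholders, not from the cited studies.
import pandas as pd

def flag_underrepresented(train_df: pd.DataFrame,
                          column: str,
                          reference: dict[str, float],
                          threshold: float = 0.5) -> list[str]:
    """Return subgroup labels whose share of the training data falls
    below `threshold` times their share of the reference population."""
    observed = train_df[column].value_counts(normalize=True)
    flagged = []
    for group, expected_share in reference.items():
        observed_share = observed.get(group, 0.0)
        if observed_share < threshold * expected_share:
            flagged.append(group)
    return flagged

# Example: uninsured patients make up about 9% of the population served
# but only 2% of the records used to train the model.
train_df = pd.DataFrame({"insurance_status": ["insured"] * 98 + ["uninsured"] * 2})
print(flag_underrepresented(train_df, "insurance_status",
                            {"insured": 0.91, "uninsured": 0.09}))
# -> ['uninsured']
```

Flagged subgroups can then prompt targeted data collection, reweighting, or explicit limits on how the algorithm is applied to those patients.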

As nurses integrate AI into practice, they should remain vigilant in assessing the accuracy, level of evidence, and quality of the information provided by AI algorithms. Many AI algorithms are “black boxes,” meaning information about how the algorithm reached its conclusion is not accessible. This lack of transparency is a problem because clinical decisions should always be based on sound clinical judgment, and AI algorithms do not have clinical judgment. Although AI algorithms can provide useful information, they should always be viewed as tools that supplement nurses’ decision-making rather than supplant it. As with the application of levels of evidence in practice, decision-making enhanced by AI requires careful interpretation to ensure its relevance to individual patients. By using nursing knowledge and treating algorithmic results as just one aspect of decision-making, nurses can ensure that patient care remains person centered, safe, and aligned with clinical judgment.

Nurses using AI in clinical practice are also advised to consider whether the AI algorithm enhances or detracts from their workflow. For example, in one health care organization, a facility-developed AI algorithm was used to assist with patient triage in the emergency department (C. R. Spencer, MD, email, July 27, 2023). Although the algorithm seemed to work well, the system required nurses to manually input clinical information into a separate interface, which was a time-consuming process. The lack of integration with existing workflows ultimately resulted in nurses spending more time documenting and less time assessing patients—the opposite of a safe triage process. In contrast, the same hospital system used an AI algorithm to provide real-time medication recommendations based on patient data, alerting nurses to potential medication errors or drug interactions. The algorithm was integrated into the electronic health record system, automatically flagging relevant patient information and displaying recommendations within the nurses’ existing medication administration workflow. This AI integration was aligned with the nursing workflow; it supported nurses in making informed medication decisions without taking time away from patient care.
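As an illustration of the kind of interaction check described above, the following is a simplified Python sketch of a rule-based alert that runs against the medications already active on a patient’s record. The drug pairs, field names, and function are hypothetical examples, not the cited organization’s actual system.

```python
# Simplified sketch of a rule-based medication-interaction check of the
# kind described above. The interaction pairs and record fields are
# hypothetical examples, not an actual hospital rule set.
from dataclasses import dataclass

# Hypothetical interaction table: unordered drug pairs that warrant an alert.
INTERACTION_PAIRS = {
    frozenset({"warfarin", "ibuprofen"}),
    frozenset({"linezolid", "fentanyl"}),
}

@dataclass
class MedicationOrder:
    patient_id: str
    drug: str

def interaction_alerts(new_order: MedicationOrder,
                       active_drugs: list[str]) -> list[str]:
    """Return alert messages if the new order interacts with any
    medication already active on the patient's record."""
    alerts = []
    for current in active_drugs:
        if frozenset({new_order.drug.lower(), current.lower()}) in INTERACTION_PAIRS:
            alerts.append(
                f"Potential interaction: {new_order.drug} with {current} "
                f"for patient {new_order.patient_id}"
            )
    return alerts

# Example: the alert surfaces inside the existing administration workflow.
print(interaction_alerts(MedicationOrder("12345", "ibuprofen"),
                         ["warfarin", "pantoprazole"]))
```

Because a check like this consumes data already in the electronic health record and returns alerts for display in the existing administration screen, it avoids the duplicate data entry that undermined the triage tool described above.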

Nurses can and should advocate for accuracy, transparency, safety, and fairness throughout the creation and application of AI. Nurses will likely be among the first to recognize when an algorithm’s performance begins to degrade. Today, as AI technologies are being created and adopted in health care settings at a rapid pace, nurses must stay up to date and be willing to challenge the foundations on which AI algorithms are developed. Through active participation in the creation and application of AI, nurses can help shape the future of health care, safeguarding its fairness and safety in the process.

References

1. Escobar GJ, Liu VX, Schuler A, Lawson B, Greene JD, Kipnis P. Automated identification of adults at risk for in-hospital clinical deterioration. N Engl J Med. 2020;383(20):1951-1960.
2. Koski E, Murphy J. AI in healthcare. Stud Health Technol Inform. 2021;284:295-299.
3. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31-38.
4. Liu R, Greenstein JL, Fackler JC, Bembea MM, Winslow RL. Spectral clustering of risk score trajectories stratifies sepsis patients by clinical outcome and interventions received. eLife. 2020;9:e58142.
5. Society of Critical Care Medicine. Choosing wisely in critical care. 2023. Accessed June 6, 2023.
