RS1 Sepsis Incidence: Nonventilator Hospital-Acquired Pneumonia Versus Pneumonia as an Admitting Diagnosis

Karen Giuliano, Dian Baker, Barbara Quinn; Northeastern University, Boston, MA

Purpose

In the fourth study of our hospital-acquired pneumonia prevention initiative, the incidence of sepsis associated with nonventilator hospital-acquired pneumonia (NV-HAP) was compared with the incidence in patients admitted with pneumonia (AP). Specifically, we investigated: (1) sepsis incidence, (2) differences in hospital length of stay (LOS) and total hospital charges, and (3) the population characteristics of patients with NV-HAP or AP in whom sepsis developed.

Background/Significance

Despite substantial worldwide efforts to decrease sepsis, its incidence and related mortality rates continue to increase. Current efforts are focused on early recognition and treatment; preventive strategies have not been deployed as aggressively. Although fundamental benefits of prevention-oriented strategies have been recognized by the Centers for Disease Control and Prevention, sepsis prevention through prevention of infection, particularly hospital-acquired infection, remains a clinical challenge.

Method

The 2012 Healthcare Utilization Project data set was used to identify patients with a diagnosis of sepsis associated with either NV-HAP or AP. We compared overall sepsis incidence, LOS, and total charges between NV-HAP patients and AP patients. We also compared data from both groups on the following characteristics: age, sex, race, number of chronic conditions, elective versus nonelective hospital admission, operating room procedure (yes/no), admission and discharge transfer status, and in-hospital mortality. We then compared these costs with the costs associated with ventilator-associated pneumonia (VAP).

Results

Sepsis incidence associated with NV-HAP was 19 times greater than that associated with AP (36.3% vs 1.9%). LOS was significantly longer and total hospital charges were significantly greater for patients with sepsis associated with NV-HAP (both P < .001). The risk of sepsis developing was 28.8 times greater with NV-HAP than with AP. Although patients who had NV-HAP or AP and in whom sepsis developed had a greater need for additional health care on hospital discharge (NV-HAP, 7.3%-39.4%; AP, 6.0%-29.5%), the magnitude of the increase was larger for NV-HAP patients with sepsis (32.1%) than for AP patients with sepsis (23.5%). There were 16 340 more patients in the NV-HAP group than in the AP group who were transferred to other health care facilities after being discharged from the hospital.
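The "19 times" and "28.8 times" figures above come from different calculations. As an illustrative sketch only (reconstructed from the rounded percentages reported here, not from the authors' patient-level data), the incidence ratio can be reproduced directly; an odds-style ratio computed from the same rounded percentages lands near, but not exactly at, the published 28.8, which presumably derives from raw counts or adjusted models:

```python
# Illustrative arithmetic only; reconstructed from the reported percentages,
# not from the authors' patient-level data.
p_nvhap = 0.363  # sepsis incidence with NV-HAP (36.3%)
p_ap = 0.019     # sepsis incidence with AP (1.9%)

# Incidence (risk) ratio: the "19 times greater" figure
risk_ratio = p_nvhap / p_ap  # ≈ 19.1

# An odds ratio computed from the same rounded percentages; the published
# 28.8 likely reflects raw counts or adjustment, so this differs slightly.
odds_ratio = (p_nvhap / (1 - p_nvhap)) / (p_ap / (1 - p_ap))  # ≈ 29.4
```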

Conclusion

NV-HAP represents a more substantial risk for sepsis-related morbidity and mortality than does AP. Total hospital charges associated with NV-HAP were 8.5 times higher than those associated with AP and VAP combined ($7 282 901 516 vs $858 705 796). With pneumonia as the leading cause of sepsis, our findings suggest that a reduction in NV-HAP may lead to a reduction in sepsis. Prevention of NV-HAP should be elevated to the same level of concern, attention, and effort as prevention of VAP in hospitals. Disclosure: When this work was started, Karen Giuliano was an employee of Stryker Medical. Barbara Quinn and Dian Baker were previously members of the Sage/Stryker speakers’ bureau. Sage did not provide funding or resources for this study.

RS2 Clinical Institute Withdrawal Assessment Implementation: Improving Care of Alcohol-Dependent Patients

Karen Gonzales, Thaddeus Love, Joan Aquino; St Joseph Hospital, Orange, CA

Purpose

The purpose of this project was early recognition of alcohol withdrawal and improved staff awareness, using the Clinical Institute Withdrawal Assessment (CIWA) tool, to help prevent or minimize the complications and consequences of alcohol withdrawal. The main goals were to reduce the percentage of patients with unidentified or undertreated alcohol withdrawal, decrease the length of unnecessary hospitalization, decrease the severity of symptoms, prevent patient and staff injury, and improve treatment management.

Background/Significance

Twenty percent of hospitalized patients have a history of unhealthy alcohol use, and 75% of these patients are alcohol dependent. Alcohol dependence and the potential development of alcohol withdrawal due to abrupt cessation of alcohol is an often-overlooked problem. Difficulty differentiating between critical illness and alcohol withdrawal often leads to underrecognition until the patient experiences severe symptoms that result in poor outcomes and safety concerns for the patient and staff.

Method

Using the Iowa Model of Evidence-Based Practice to improve early identification and nursing knowledge, a CIWA team consisting of nursing staff was created. Relevant literature on alcohol withdrawal and the CIWA tool was reviewed. A pilot study was conducted on the Definitive Step-Down Unit (DSU) during a 5-month period in the first and second quarters of 2017. Education for nursing staff and physicians was provided, including pretests and post-tests to measure knowledge. A CIWA algorithm was created along with the implementation of the CIWA tool. A retrospective chart review for all at-risk patients admitted or transferred to the DSU was completed.

Results

A total of 45 at-risk patients were identified by alcohol consumption screening, prior history, current signs and symptoms of withdrawal, and sitter or restraint use. Of these 45 patients, 23 had poor outcomes due to experiencing withdrawal symptoms after a substantial delay in screening, assessment, and intervention. The mean delay was 24 hours. By the end of the pilot study, the rate of poor outcomes among patients going through alcohol withdrawal was significantly reduced (from 70% to 32%). DSU nursing staff expressed increased knowledge and confidence, with a 30% increase in surveyed knowledge scores. Chart audits showed an increase in completed screenings and CIWA assessments and an increase in timely detoxification orders.

Conclusion

Thanks to the success of this project, education is expanding to other medical units of the hospital. The CIWA team is evolving, and continued research and case studies will be completed to sustain positive patient outcomes. This project implementation using the Iowa Model of Evidence-Based Practice was successful; education and training are highlighted as key components. An important part of implementation is using the information provided by the CIWA tool to manage patient care accordingly.

RS3 Giving a Voice to the Voiceless: Effects of Awaiting Legal Guardianship for End-of-Life Decisions

Lori Davis, Ann Pedack, Lucy Greenfield, Kelly King; UW Medicine, Seattle, WA

Purpose

Advances in medical technology create an environment where death in intensive care units is no longer an anomaly. Many patients have not documented or discussed their end-of-life (EOL) wishes, which leads to an extensive hospital stay without improvement in health or quality of life. This study, at a level I trauma and academic medical center in the Pacific Northwest, compared length of stay and cost differences between patients with and without legal decision makers.

Background/Significance

Medical advances have increased the number of persons spending their last days of life receiving intensive care. In a retrospective national study, researchers found that 1 in 5 Americans dies in an intensive care unit. Other studies document that up to 43% of adults near the EOL are unable to make decisions about their medical care, resulting in a surrogate making decisions for them.

Method

In this retrospective, case-control matched study, the effects of guardianship availability on EOL decisions were compared. The cases were patients who died after a decision by a court-appointed guardian to withdraw care. Case patients were identified by using a list of University of Washington Health Care System patients who required guardian appointment from 2003 to 2016. Patients in the control group were matched for age, admitting diagnosis, and morbidity scores; those data were obtained from a Harborview Medical Center database. Death of patients in the control group occurred after decisions by family or a person with previously appointed durable power of attorney (DPOA) to withdraw care. Differences in EOL outcomes were evaluated by using χ2 analysis.

Results

The sample included 31 case patients and 303 control patients. Age, sex, and Acute Physiology and Chronic Health Evaluation scores were similar, whereas health care coverage differed significantly between case and control patients (P < .001). The mean (SD) length of stay for patients without a surrogate was 40 (16) days, whereas that for patients with a surrogate was 9.5 (15) days (P < .001). Direct, indirect, and total costs differed significantly between groups (P < .001). Total mean cost for case patients was $158 332 (SD, $75 457) and for control patients was $55 329 (SD, $77 615).

Conclusion

All patients, every time, need to have their voice heard and a surrogate identified as an integral part of the health care continuum. We are developing strategies to streamline the guardianship process. These include changing legislation around EOL decisions for patients without a legal surrogate, instituting educational programs for health care providers to identify and document clients’ DPOA, and updating the electronic record and admission process to prioritize identification of a legal surrogate.

RS4 Scaling Up and Validating a Nursing Acuity Tool to Ensure Synergy in Pediatric Critical Care

Jean Connor, Christine LaGrasta, Patricia Hickey; Boston Children’s Hospital, Boston, MA

Purpose

To address the need for practical tools that capture multiple domains or attributes of acuity, a facilitator led the development of the Complexity Assessment and Monitoring to Ensure Optimal Outcomes (CAMEO) acuity tool. The CAMEO tool defines acuity in terms of nursing cognitive-workload complexity: the intellectual processing of information about patients that drives critical thinking, decision-making, and the resulting level of surveillance necessary to meet patient needs.

Background/Significance

Nursing productivity has been measured to describe and quantify nursing workload, intensity, and resource allocation. In pediatric critical care, most measurement of patient acuity has focused on physiological status to predict patient outcomes, length of stay, and resource use. Although these tools have demonstrated scientific usefulness, they are not sufficiently comprehensive to inform nurse staffing assignments and, in general, have limited practical application.

Method

Validation of the CAMEO acuity tool was undertaken by an expert clinical panel with representation from 4 intensive care units. Given the lack of a gold standard with which to compare the CAMEO acuity tool, construct validation was conducted using a pediatric classification system, the Therapeutic Intervention Scoring System (TISS-C); and 2 pediatric physiological acuity tools, Pediatric Risk of Mortality III (PRISM-III) and, for patients in the neonatal intensive care unit, the Score for Neonatal Acute Physiology with Perinatal Extension (SNAPPE II). Convergent and divergent validities of the CAMEO score versus the TISS-C, PRISM-III, and SNAPPE II scores were assessed using Spearman rank correlation coefficients. A ρ value greater than 0.5 was considered a strong correlation, 0.35 to 0.5 was considered a moderate correlation, and 0.2 to 0.34 was considered a weak correlation.
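The correlation analysis above can be sketched as follows. This is a minimal, standard-library reimplementation for illustration only (in practice a library routine such as `scipy.stats.spearmanr` would be used); the strength thresholds are those stated in the Method, and the "negligible" label for ρ below 0.2 is our assumption, since the abstract does not name that band:

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def classify(rho):
    """Strength bands as defined in the Method; 'negligible' is our label."""
    r = abs(rho)
    if r > 0.5:
        return "strong"
    if r >= 0.35:
        return "moderate"
    if r >= 0.2:
        return "weak"
    return "negligible"
```

Under this scheme, the reported CAMEO-vs-TISS-C correlation (ρ = 0.567) is classified as strong, and the CAMEO-vs-PRISM-III (ρ = 0.446) and CAMEO-vs-SNAPPE II (ρ = 0.359) correlations as moderate.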

Results

Among the 235 completed CAMEO acuity tools across the intensive care units (ICUs), the mean total score was 99.06 and the median total score was 97.00 (range, 59-204). The distribution of the 235 patients by CAMEO complexity classification (I-V) was as follows: I, 22 (9.4%); II, 53 (22.6%); III, 56 (23.8%); IV, 66 (28.1%); and V, 38 (16.2%). Histograms of the scores for the CAMEO acuity tool, TISS-C, and PRISM-III were all positively skewed. Findings from 235 patients across the 4 ICUs revealed significant correlations between the CAMEO and TISS-C scores (ρ = 0.567; P < .001), the CAMEO and PRISM-III scores (ρ = 0.446; P < .001), and the CAMEO and SNAPPE II scores (ρ = 0.359; P = .01).

Conclusion

This examination revealed moderate to strong, statistically significant correlations between the CAMEO acuity tool and the comparison measures in the 4 pediatric ICUs. The CAMEO acuity tool provides a comprehensive description and quantification of how nurses assess direct and indirect patient and family needs and then match personal skill sets to provide optimal nursing care. These findings have supported the initiation of a multisite validation of the CAMEO acuity tool in pediatric hospitals.

RS5 Are Identification Badges Worn by Health Care Workers in Intensive Care Units Possible Fomites of Hospital-Acquired Infection?

Carol Cadaver, Devika Patel; Children’s Hospital Los Angeles, Los Angeles, CA

Purpose

This study addressed the following research question: Are identification badges worn by bedside health care workers (HCWs) fomites in the transmission of infection to patients in the pediatric cardiothoracic intensive care unit (ICU)?

Background/Significance

Hospital-acquired infections (HAIs) can result in longer stays, higher health care costs, and increased morbidity and mortality rates in hospitalized patients. Patients younger than 2 years who require care in the ICU have high rates of HAIs. Despite the use of evidence-based infection control measures, outbreaks of HAIs have occurred. Investigation of sources of contamination that increase the transmission of HAIs is needed.

Method

This mixed-method pilot study involved recruitment of 30 HCWs, including physicians, respiratory therapists, and nurses, in a pediatric cardiothoracic ICU. The quantitative measurement involved swabbing HCWs’ identification badges and then culturing the wet swabs. The qualitative measurement was a demographic questionnaire regarding the participants’ badge-wearing behaviors, to identify possible risk factors for badge contamination.

Results

Of the swab cultures, 46.7% were positive for bacterial growth and 53.3% of cultures showed no growth. Of the positive cultures, 65% grew Staphylococcus species; of these, 90.4% were coagulase negative. All of the bacteria cultured can cause nosocomial infection in immunocompromised patients. Although the HCWs identified various ways of wearing and caring for their identification badges, there did not seem to be a large discrepancy in the percentage of positive cultures among the varied practices.

Conclusion

Microorganisms that can cause HAIs are present on bedside HCWs’ identification badges; therefore, these badges are potential fomites for HAIs. The presence of nosocomial bacteria on identification badges worn by bedside HCWs does present a risk to hospitalized patients; therefore, we suggest taking measures to clean badges and not allow them to contact patients or their surroundings. These measures may assist in decreasing the incidence of HAIs.

RS6 Comparing Outcomes in Manual and Automatic Prone Positioning Therapy for Acute Respiratory Distress Syndrome

Lauren Morata, Mary Lou Sole, Carrie Ogilvie, Rebecca Anderson; Lakeland Regional Medical Center, Lakeland, FL

Purpose

Moderate to severe acute respiratory distress syndrome (ARDS) is a complex disease with a high mortality rate. Prone positioning therapy is an effective treatment option that helps reduce mortality among patients with ARDS. Nurses are responsible for safe and effective patient positioning by either manually placing patients prone or using an automatic proning bed to do so. The purpose of this study was to analyze various outcomes associated with manual versus automatic prone positioning therapy in patients with ARDS.

Background/Significance

Prior research on prone positioning therapy in ARDS has focused on mortality benefit, yet, to our knowledge, no study has compared the outcomes between manual and automatic prone positioning therapy. The multidisciplinary team of an 849-bed tertiary referral center implemented an evidence-based prone positioning protocol to guide the use of manual and automatic prone positioning therapy. Comparison of outcomes between the 2 groups will assist other institutions to make decisions about methods of pronation and implement protocols to promote safe practice.

Method

After approval was received from the institutional review board, a retrospective, descriptive comparative approach was used to analyze data from 37 adult patients whose condition met the Berlin definition of moderate to severe ARDS. All patients received either manual or automatic prone positioning therapy between November 1, 2014, and November 30, 2016. Data were part of a quality improvement database initiated at the start of protocol implementation. Statistical analysis included χ2 test for complications and discharge disposition, and Mann-Whitney U test for time to initiating prone positioning from physician order and for intensive care unit (ICU) and hospital length of stay (LOS). A cost analysis was used to evaluate the cost associated with each therapy.

Results

Manual and automatic prone positioning therapies were used for 16 and 21 patients, respectively. Time to initiation was similar between groups. Patients undergoing automatic prone positioning therapy were more likely to experience pressure injuries (P = .04), especially of the head (P = .003), thorax (P = .003), and lower extremities (P = .047). Other complications did not differ significantly between groups. Although the difference was not statistically significant, patients placed prone manually had shorter ICU and hospital LOS (7.1 and 6.5 days, respectively) compared with patients undergoing automatic prone positioning. In addition, patients undergoing manual prone positioning therapy were more likely to be discharged home than were patients who had automatic prone positioning therapy (43.8% vs 28.6%).

Conclusion

Owing to the small sample size, additional research is needed to determine if manual or automatic prone positioning therapy is preferred. However, these results suggest that manual prone positioning therapy is safer, with lower complication rates, and may be more efficacious, as indicated by shorter LOS and more favorable discharge disposition. When automatic prone positioning therapy is required (eg, morbid obesity limiting safe manual pronation), nursing interventions are important to protect the patient’s skin from pressure injuries.

RS7 Reduction in Central Catheter Use Owing to an Ultrasound-Guided, Extended-Dwell Intravenous Catheter

Jona Caparas; Mount Sinai Medical Center, New York, NY

Purpose

Ultrasound-guided peripheral intravenous access is becoming the standard of care for use in patients in whom gaining intravenous access is difficult. Although most studies have focused on the improved insertion outcomes achieved with ultrasound guidance, little attention has been given to subsequent catheter performance. This study compared ultrasound-guided peripheral intravenous access using a standard polyurethane catheter (PIV) versus using a novel extended-dwell catheter (EDPIV).

Background/Significance

Ultrasound-guided PIVs have a high cannulation success rate but disappointingly low dwell times. For example, using a 2.5-inch (6.35 cm), 18-gauge angiographic catheter, Dargin and colleagues reported a median dwell time of only 26 hours and an overall intravenous survival rate of 56%. Given the time, effort, and expense necessary for ultrasound placement of a PIV, clinicians need vascular access devices capable of lasting an extended time without complications.

Method

This was a single-center, prospective cohort study extending from May 1 to June 30. The PIV used was either an 18-gauge/1.88-inch (4.78 cm) or 20-gauge/1.88-inch over-needle polyurethane catheter, as selected by the clinician at the bedside. The EDPIV used was a 3F/2.4-inch (6.10 cm) overwire, ChronoFlex C catheter (AdvanSource Biomaterials). All catheters were placed with sterile technique (including sterile probe cover) using the dynamic ultrasound method in the transverse axis. Skin antisepsis was achieved with 2% chlorhexidine gluconate. Securement was done with a 3.5 × 4.5-inch (8.89 × 11.43-cm) bordered transparent dressing.

Results

A total of 361 patients had catheters placed under ultrasound guidance: 278 who received a PIV, and 83 who received the EDPIV. The mean dwell time in the PIV group was 5.08 days (range, 1-16 days); 5.7% had their central catheters removed upon placement of the PIV. The mean dwell time in the EDPIV group was 10.7 days (range, 1-29 days); 37.3% had their central catheters removed upon placement of the EDPIV. Many patients in the EDPIV group were discharged home with a prescription for antibiotics; had they been followed up, the EDPIV dwell time might have been even longer. Central catheter use (ie, percentage of central catheters per overall patient-days) decreased from 20.3% during the same period the previous year, during which EDPIVs were not used, to 17.2%, a 15.3% reduction.
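The reported reduction is a relative (not absolute) change; a brief arithmetic sketch using only the figures given above:

```python
# Reproducing the reported central-catheter-use figures (illustrative arithmetic).
before, after = 20.3, 17.2  # central catheters as % of overall patient-days

# Relative reduction: (20.3 - 17.2) / 20.3, i.e. the "15.3%" figure
relative_reduction = (before - after) / before * 100  # ≈ 15.3%

# Mean dwell-time comparison: EDPIV lasted "more than twice as long"
dwell_ratio = 10.7 / 5.08  # ≈ 2.1
```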

Conclusion

The EDPIV catheter outperformed the standard PIV, lasting more than twice as long. Most important, during the trial period, 37.3% of patients who received the EDPIV were able to have their central catheters removed upon EDPIV placement, resulting in a 15.3% decrease in central catheter use.

RS8 Alterations in Perfusion Are Associated With Delirium in Patients in the Surgical Critical Care Unit

Jenny Alderden; Boise State University, Boise, ID

Purpose

To determine whether factors associated with alterations in delivery of oxygen to the brain are associated with development of delirium. The specific aim was to determine whether alterations in blood pressure (BP), peripheral capillary oxygen saturation (Spo2), and/or hemoglobin (oxygen-carrying capacity) were associated with delirium.

Background/Significance

Delirium is a serious problem among critical care patients. Although some studies have identified factors associated with delirium, most studies are conducted in noncritical care populations and results are inconsistent. One potential mechanism for the development of delirium is inadequate oxygen delivery to the brain. This is particularly relevant in the intensive care unit (ICU), where a patient’s physiological status is dynamic and where hypotension and/or low oxygenation are relatively common.

Method

Information about delirium, hemoglobin level, BP, and Spo2 was obtained from the electronic health record. The sample consisted of patients in a surgical ICU at a level I trauma center and academic medical center. Delirium was assessed by using the Confusion Assessment Method-ICU. The minimum hemoglobin level for each patient was recorded. Hypotension was defined as 3 or more consecutive systolic BP readings of less than 90 mm Hg. We used 3 consecutive readings to control for spurious readings that occasionally occur with hemodynamic monitoring. Similarly, we defined decreased oxygenation as 3 or more consecutive pulse oximeter readings of less than 90%.
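The run-based definitions above (3 or more consecutive readings below threshold, to guard against spurious values) can be sketched as a small helper; this is an illustrative reconstruction, not the authors' code, and the function name and shape are our own:

```python
def sustained_below(readings, threshold, run_length=3):
    """Return True if `readings` contains at least `run_length` consecutive
    values strictly below `threshold` (eg, 3 systolic BPs < 90 mm Hg, or
    3 pulse oximeter readings < 90%), per the study's run-based definition."""
    run = 0
    for r in readings:
        run = run + 1 if r < threshold else 0  # reset on any non-low reading
        if run >= run_length:
            return True
    return False

# Hypothetical usage with made-up reading sequences:
hypotension = sustained_below([92, 88, 87, 86], 90)        # True: 3 in a row < 90
desaturation = sustained_below([88, 95, 88, 95, 88], 90)   # False: never 3 in a row
```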

Results

Among 2963 patients, 2347 (79.2%) did not have delirium, whereas 491 (16.6%) experienced delirium. Delirium could not be assessed in 125 patients (4.2%) because of decreased level of consciousness; those patients were excluded from the analysis. The mean age was 55 years (SD, 18 years). Individuals with delirium had lower minimum hemoglobin values (mean, 8.3 g/dL; SD, 1.9 g/dL) than did individuals without delirium (mean, 9.7 g/dL; SD, 2.3 g/dL; t = 12.69; df = 2826; P < .001). Patients with delirium were also more likely to experience hypotension (systolic BP < 90 mm Hg; χ² = 122.8; df = 1; n = 2838; P < .001) and to have Spo2 lower than 90% (χ² = 132.2; df = 1; n = 2838; P < .001).

Conclusion

Our findings show that factors associated with oxygen delivery to the brain and other organs are significantly associated with development of delirium. Because patients with perfusion alterations are at high risk for delirium, those patients may benefit from maximal interventions to prevent or ameliorate delirium, such as environmental modifications and careful attention to sleep/wake cycles. Funding: National Institute of Nursing Research, National Institutes of Health (T32NR01345 and F31NR014608).

Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

RS9 Exploring the Lived Experiences of Patients Who Have Participated in an Early Mobility Program

Amy Doroy; UC Davis Medical Center, Sacramento, CA

Purpose

The lived experience of people who were critically ill and enrolled in an early mobility program is described. The main objectives of the research were to better understand patients’ experiences in the intensive care unit (ICU), to identify facilitators and barriers to participation in early mobility in the ICU from the patient’s perspective, and to ascertain challenges to recovery.

Background/Significance

Studies have shown that the traditional therapies of keeping patients heavily sedated and immobile cause harm. Many hospitals are now using an ABCDEF bundle of care delivery (A: assessment, prevention, and treatment of pain; B: both spontaneous breathing and awakening trials; C: choice of analgesia/sedation; D: delirium assessment, prevention, and treatment; E: early mobility and exercise; and F: family engagement/empowerment), where patients are sedated less and receive physical therapy. Although studies have shown improved patient outcomes, no studies to date, to our knowledge, have examined the patient experience of being enrolled in an early mobility program and being less sedated and mobilized early in the hospitalization.

Method

A phenomenological design was chosen to study the experience of being a patient in an ICU that uses an early mobility bundle. Inclusion criteria for the study were as follows: admitted to the ICU, intubated for longer than 48 hours, ICU stay longer than 3 days, and age 18 years or older. Exclusion criteria included incarceration, drug overdose, dementia, and developmental delay. Institutional review board approval was obtained in June 2015, and a total of 12 patients were interviewed. A semistructured interview script was developed in consultation with qualitative experts. A nondirective style of interviewing that uses open-ended questions to allow the participants the opportunity to discuss and describe their experiences was used.

Results

An overall theme of loss of self was identified through analysis of interview recordings and transcripts. In addition, participants described pain and discomfort related to the ventilator, as well as a fragmented reality: loss of memories, confusion, and/or delirium-related recollections of experiences that occurred while in the ICU. Patients no longer in the ICU, including patients in whom delirium was not recognized during their hospital stay, described having nightmares, missing memories, hallucinations, and poor cognition after discharge.

Conclusion

Based on the results of this study, there appear to be major effects on patients’ sense of self and physical and mental health during and after discharge from the hospital that need to be addressed. According to these findings, focused inpatient interventions of an early mobility program are not sufficient to improve patients’ experiences in the ICU and after discharge, and the role of follow-up care and treatment needs to be explored in this population.

RS10 Incidence and Risk Factors Associated with Hyperactive, Hypoactive, and Mixed Delirium

Sharon O’Donoghue, Dorothea Devanna, Alistair Johnson; Beth Israel Deaconess Medical Center, Boston, MA

Purpose

A retrospective secondary analysis to distinguish the incidence and risk factors associated with 3 types of delirium in adult critically ill patients: hyperactive, hypoactive, and mixed. Risk factors, stratified by type of delirium, were identified and modeled to improve early identification of delirium and as an initial step toward identifying practices that could mitigate the negative effects of delirium.

Background/Significance

Delirium has been cited as occurring in up to 80% of patients in the intensive care unit (ICU). Time spent in delirium has been associated with long-term cognitive and functional deficits. Although the incidence of delirium is high, it is often underrecognized by ICU clinicians. A key to preventing the negative outcomes of delirium is early identification. Identification of the different risk factors specific to each type of delirium may help to improve the identification of delirium.

Method

Data stored in the Medical Information Mart for Intensive Care were used for this retrospective secondary analysis. The study population consisted of all patients admitted to 2 adult medical ICUs at 1 academic medical center who screened positive for delirium with the Confusion Assessment Method-ICU. Patients were excluded if they were not assessed for delirium on admission or if their first assessment was positive. Extracted data included incidence and risk factors such as age, sex, fever, tethers, and severity scores. To assess the relationship between risk factors and delirium, Cox proportional hazards models with time-varying coefficients were constructed.

Results

Of the 2817 patients, 1398 met inclusion criteria, and delirium developed in 278 (19.9%). Of the 278 patients with delirium, 71 (25.5%) initially had hyperactive delirium and 207 (74.5%) initially had the hypoactive form; mixed delirium subsequently developed in 156 (56.1%). Most of the patients with hyperactive delirium (n = 64; 90%) and 44% (n = 92) of those with hypoactive delirium converted to mixed delirium. The models showed that certain risk factors were associated with occurrence of the 3 types of delirium. Severity scores and physical restraints were associated with each type of delirium.

Conclusion

Patients with delirium in the ICU have much poorer outcomes than do patients without delirium. As a first step to mitigate delirium’s negative effects, ICU clinicians must be able to accurately identify it. Expanding the current knowledge of the incidence and risk factors associated with the different types of delirium could improve delirium identification. This project also provides a foundation for exploration of interventions that may be more effective in improving outcomes for a specific type of delirium.

RS11 CORTRAK-Assisted Feeding Tube Insertion Competency Assessment: Superuser Training Recommendations

Annette Bourgault, Laura Gonzalez, Lillian Aguirre, Joseph Ibrahim; University of Central Florida and Orlando Regional Medical Center, Orlando, FL

Purpose

To explore factors influencing competency of superusers who perform CORTRAK (Halyard Health) assisted feeding tube (FT) insertion. CORTRAK superusers received initial hospital-based training and 3 competency assessments, but their level of ongoing competency was unknown. The specific aims of this study were to assess CORTRAK superuser competency and explore factors influencing competency with the CORTRAK system. This study is aligned with the mission of the American Association of Critical-Care Nurses to drive excellence and rely on expert knowledge and skill.

Background/Significance

Safe FT placement is compromised by lack of valid verification methods. CORTRAK’s real-time visualization can help avoid unintentional lung placement, yet 89% of adverse events reported by the Food and Drug Administration were due to user error. Positive FT insertion outcomes are associated with highly trained, experienced CORTRAK superusers, yet training is variable. Initial competency is often concluded by observation of 2 or 3 CORTRAK-assisted FT insertions. The ideal number of FT insertions to gain and maintain competency is unknown.

Method

In a prospective, observational pilot study, CORTRAK-assisted FT insertion competency, confidence, and self-efficacy were assessed. Critical care nurses who were CORTRAK superusers at a tertiary care hospital in central Florida were recruited to participate. Data collection included demographics, case studies on CORTRAK-assisted FT insertion, CORTRAK-assisted FT insertion competency, and measures of self-confidence and self-efficacy. FT insertions were performed in random order using 2 task trainers, CORMAN and Anatomical Box (both from Halyard Health). A vendor-developed CORTRAK competency checklist was used to assess competency. Limited variations were observed in the data; therefore, only descriptive statistics are presented.

Results

Superusers (N = 20) had a mean of 13 years (SD, 9.49 years) of FT insertion experience. They had inserted a mean of 53 CORTRAK FTs (SD, 31.76 FTs) since initial training a mean of 8 months (SD, 15.75 months) earlier, inserted a mean of 2 CORTRAK FTs per week (SD, 1.38 FTs), and had most recently inserted an FT a mean of 7 days earlier (SD, 6.27 days). All superusers were competent; 1 required remediation for receiver unit placement. The mean self-confidence score for CORTRAK-assisted FT insertion was 4.6 out of 5 (SD, 0.68) and the mean score for demonstrated confidence observed by the researcher was 4.85 out of 5. The mean self-efficacy score was 35 out of 40 (SD, 3.68). Participants estimated that a mean of 10 FT insertions (SD, 7.33 insertions) was needed to become confident and a mean of 8 FT insertions (SD, 5.49 insertions) was needed before they felt competent as a superuser.

Conclusion

We recommend a minimum of 3 observations to assess initial FT insertion competency. The number of observations to determine competency should be individualized to the novice superuser. Ongoing competency reassessment should be included in all CORTRAK training plans. In addition, the number of designated superusers in an organization should be limited to ensure that each clinician has the opportunity to insert at least 2 FTs per week to maintain competency and confidence in this high-risk skill.

Disclosure: This study was funded by Halyard Sales, which also loaned equipment for the study. Halyard owns the CORTRAK device that was studied, but the company had no influence over the study results or information dissemination. Joseph Ibrahim has financial involvement with Prytime Medical as a speaking consultant.

RS12 Say YES to the Breasts! Comparing Poststernotomy Breast Support Satisfaction and Compliance

Kimberly Bolling; Carilion Clinic, Roanoke, VA

Purpose

This clinical nurse study compared satisfaction and wear compliance among 3 different breast supports/bras (Velcro [hook and loop, current standard of care], zipper, and hook-and-eye front closures) for larger-breasted women who have undergone cardiac surgery. We used the study results to develop an evidence-based protocol regarding poststernotomy bra selection and use.

Background/Significance

Women (especially those with breast cup size C or larger) undergoing cardiac surgery via median sternotomy incision benefit from breast support to reduce pain, wound breakdown, and infection. Although the assumption is that a patient-preferred bra will be worn consistently, many factors influence the use and choice of bra: incision, tubes, expectations, preference, and inconsistency of nursing instruction. This study addressed a gap in the recent literature regarding the best bra for women after sternotomy.

Method

After approval was received from the institutional review board, a randomized, 3-group, posttest-only control group design was used to compare 3 commercially available bras for satisfaction and wear compliance. In a convenience sample, 60 women were sized and randomly assigned to receive the current standard-of-care bra or another study bra, placed immediately after surgery. Three women whose size exceeded what the available zippered product could accommodate were excluded. Participants agreed to wear the bra at least 20 h/d until cleared by their health care provider. At 2 postsurgery time points (day 5 or discharge day, and follow-up office visit), the women completed investigator-developed surveys. Quantitative statistics were computed, and written comments were evaluated.

Results

Of the products studied, satisfaction and wear compliance were lowest with the hook-and-loop product. Significant differences were detected (P = .03) between hook-and-loop and hook-and-eye products for satisfaction at follow-up, with preference for hook-and-eye closures. Women recommended the hook-and-loop–closure bras at a significantly lower rate than they did the zipper-closure bras (P = .04) and the bras with hook-and-eye closures (P = .02). Variability in recommendation rating was greatest for the hook-and-loop closure and least for the hook-and-eye closure. Although wear compliance did not differ significantly between products, the percentages of women, by closure type, who wore the bra 7 days a week were as follows: hook-and-loop, 70%; zipper, 85%; and hook-and-eye, 89%.

Conclusion

Findings support that the standard-of-care bra (hook-and-loop front closure) is not the best product, in terms of satisfaction and wear compliance, for women after sternotomy. The zipper product is not available in sizes for larger women, and zipper malfunction was frequent. Based on wear compliance, comfort, and preference, the hook-and-eye–closure bra was the superior product in this study, and findings support a change in practice to that product.

RS13 Nurses’ Perceptions, Self-confidence, and Invitation of Family Presence During Resuscitation

Kelly Powers, Charlie Reeve; University of North Carolina, Charlotte, NC

Purpose

Studies have shown psychological benefits for family members who participate in family presence during resuscitation (FPDR). Because intensive care unit (ICU) nurses often have opportunities to initiate FPDR, it is important to understand what factors are associated with improved FPDR perceptions, self-confidence, and invitation. Thus, we aimed to describe ICU nurses’ perceptions and self-confidence related to FPDR, determine the factors influencing FPDR invitation by nurses, and evaluate differences according to demographic and professional factors.

Background/Significance

On the basis of research evidence showing that FPDR can be beneficial, professional organizations have published practice alerts and guidelines stating that family members of patients in the ICU should be given the option to participate in FPDR. Despite this guidance, FPDR remains controversial and is not widely implemented. Although some studies have investigated relationships between nurses’ demographic and professional factors and their decision to invite FPDR, the evidence to date is contradictory and inconclusive.

Method

A cross-sectional survey design was used to collect data from a convenience sample of 395 ICU nurses in the United States. Online data collection occurred during 4 weeks in 2016. Demographic and professional information was collected. Measurement of dependent variables included (1) self-reported frequency of asking family if they wanted to be in the room during cardiopulmonary resuscitation (CPR; invitation), (2) the Family Presence Risk-Benefit Scale (perceptions), and (3) the Family Presence Self-Confidence Scale (self-confidence). Analysis began with descriptive statistics and zero-order correlations, and then multiple regression was used to identify the most influential factors.

Results

Despite high frequency of performing CPR, 33% of participants had never invited FPDR and 33% had invited it just 1 to 5 times. Perceptions and self-confidence were strongly associated with nurses’ invitation of FPDR (P < .01). Having a higher level of education, clinical experiences with FPDR, and a policy on FPDR were the strongest predictors of improved perceptions (R2 = 0.26; P < .01). For self-confidence, increased years of nursing experience and clinical experiences with FPDR were the strongest predictors (R2 = 0.21; P < .01). Finally, having had FPDR education, clinical experiences with FPDR, and a policy on FPDR most strongly predicted whether nurses invited FPDR (R2 = 0.60; P < .01).

Conclusion

Results suggest that perceptions, self-confidence, and invitation of FPDR may be enhanced by a few modifiable factors. The most influential, modifiable factors noted were having facility policy, education, and clinical experiences related to FPDR. Nurses who possessed these factors had improved perceptions and self-confidence, and also invited FPDR with increased frequency. Practice recommendations are to create policy, provide education, and promote clinical experiences with FPDR.

RS14 Aspiration in Patients Receiving Mechanical Ventilation: Intubation Factors and Associated Outcomes

Mary Lou Sole, Steven Talbert, Aurea Middleton, Melody Bennett; University of Central Florida, Orlando, FL

Purpose

Aspiration of oral and gastric contents into the trachea and lungs often occurs during endotracheal tube (ETT) intubation. Opening of the glottis for intubation facilitates aspiration of secretions. Aspiration can lead to complications resulting in prolonged ventilation and ventilator-associated conditions (VAC). The study purposes were to describe the frequency of aspiration and its relationship with intubation factors and to identify the influence of aspiration on patient outcomes.

Background/Significance

Intubation is often an emergency that requires prompt insertion of the ETT. Factors at time of intubation (eg, personnel, location, urgency) may increase the risk for aspiration. Oral secretions and gastric regurgitation increase the risk for aspiration through the open glottis. Secretions from the mouth (eg, amylase) and stomach (eg, pepsin) are not normally present in the lungs. Aspiration increases the risk of lung injury, which may result in adverse patient outcomes.

Method

This is a retrospective, descriptive, comparative study of a subset of data from a recently completed clinical trial. Participants were older than 18 years, enrolled within 24 hours of intubation, and without suspected aspiration. Immediately after enrollment, tracheal specimens were collected and analyzed for presence of amylase (oral) and pepsin (gastric), using standard laboratory methods. Amylase values greater than 396 U/L and pepsin values greater than 6 ng/mL were considered positive for oral and gastric aspiration, respectively. Demographic and outcome data were collected, including intubation factors, ventilator hours, length of stay, and mortality. Data analysis included χ2 and t tests for independent samples.

Results

Data were available for 102 patients. The mean age was 59.7 (SD, 18.0) years, 56% were male, 20% were of Hispanic ethnicity, 25% reported belonging to a racial minority, and 47% had medical-surgical diagnoses. Most patients were intubated by a physician (94%) in a hospital (95%) for urgent or emergent airway management (88%). Aspiration of oral contents (78%) was more common than gastric (32%); however, 29% of patients aspirated both amylase and pepsin. No intubation or demographic factors were associated with aspiration (P > .05). Patients positive for pepsin aspiration had received ventilatory support for a longer time (156 vs 113 hours; P = .03) and had a higher rate of VAC (21% vs 7%; P = .04). No differences were noted in other outcome variables (P > .05).

Conclusion

Despite absence of documentation of aspiration at the time of intubation, aspiration of oral and gastric secretions was frequently detected via laboratory analysis. Surprisingly, factors associated with intubation did not contribute to aspiration in this subset of patients. Negative pulmonary outcomes were associated with aspiration of gastric contents rather than oral contents. Strategies to prevent silent gastric regurgitation and reflux at the time of intubation and afterward need to be identified. Funding: National Institutes of Health grant 1R01NR014508.

RS15 Discriminant Validity Testing of the Respiratory Distress Observation Scale

Karen Reavis, Fatsani Dogani; Sharp Healthcare, San Diego, CA

Purpose

To explore the relationships between the Respiratory Distress Observation Scale (RDOS), the Richmond Agitation-Sedation Scale (RASS), and the Critical-Care Pain Observation Tool (CPOT) in patients receiving mechanical ventilation.

Background/Significance

The RDOS is cited in numerous research articles, including the American Thoracic Society statement on dyspnea, and by the Improving Palliative Care in the Intensive Care Unit (IPAL-ICU) Advisory Board. It was designed for adult patients with cognitive impairment. In adult critical care, few behavioral scales are available for discriminating sources of discomfort in ICU patients who are cognitively impaired and receiving mechanical ventilation.

Method

This was a nonexperimental, descriptive, observational study with concurrent and retrospective review of medical records.

Results

Our sample consisted of 148 patients with cognitive impairment who were receiving mechanical ventilation in a medical ICU. Scores on the RDOS were compared with the CPOT and RASS agitation scores. Spearman ρ showed a weak correlation between the RDOS and CPOT scores (ρ = 0.15; P = .02). There was no significant correlation between the RDOS and RASS scores (ρ = −0.02; P = .76). In addition, the CPOT and RASS scores were weakly correlated (ρ = 0.26; P < .001).

Conclusion

Dyspnea and pain are the 2 most common symptoms experienced by patients. The correlation between the RDOS and CPOT scores is of concern because clinicians use these scores as a basis for treatment and evaluation of treatment response. Research is needed to focus on examination of within-scale components to increase differentiation between the newer RDOS and the widely used RASS and CPOT.

RS16 Association Between Fluid Overload and Delirium in Patients Receiving Mechanical Ventilation

Akira Ohuchi, Hideaki Sakuramoto, Haruhiko Hoshino, Yoshimi Hattori; University of Tsukuba Hospital, Tsukuba, Japan

Purpose

To evaluate the association between fluid overload and delirium in patients receiving mechanical ventilation.

Background/Significance

Several studies have shown an association of fluid overload with mortality or duration of mechanical ventilation in critically ill patients. Despite a common perception that it does, whether fluid overload causes brain dysfunction remains unknown. Delirium is a condition characterized by acute brain dysfunction; however, to our knowledge, no research has examined the association of fluid overload with delirium in patients receiving mechanical ventilation.

Method

This retrospective, observational cohort study was conducted between April 2015 and May 2017 at University of Tsukuba Hospital in Japan. All patients admitted to the intensive care unit (ICU) were screened for eligibility, and those who met the inclusion criteria of having received mechanical ventilation for 48 hours or longer and ICU admission of 7 days or longer were enrolled in this study. Exclusion criteria were being postresuscitation, having a history of psychosis or neurologic disease, and having received mechanical ventilation for 24 hours or longer before the ICU admission. All patient data were collected from patient records. Outcomes of interest included delirium- or coma-free days (DCFDs) within the 7-day study period and mortality during the ICU admission.

Results

Of the 118 patients, 17 died during their ICU stay; the ICU mortality rate was 14.4%. The mean age of the patients was 65 years (SD, 14 years). Mean score on the Acute Physiology and Chronic Health Evaluation II at enrollment was 22.5 (SD, 6.2). Overall, 69% of patients screened positive for delirium within 7 days, and the median number of 7-day DCFDs was 3 days (range, 0.25-5.00 days). Increased calculated fluid balance and body weight on the first day were associated with fewer 7-day DCFDs by univariate analysis (P < .05 for both). After adjustment for covariates, a significant negative association was found between increase in body weight or fluid balance on the first ICU day and 7-day DCFDs (odds ratio, 0.89; 95% CI, 0.83-0.97; and odds ratio, 0.91; 95% CI, 0.84-0.98, respectively).

Conclusion

Greater fluid overload on the first ICU day was associated with fewer 7-day DCFDs in patients receiving mechanical ventilation.

RS17 Barriers to Early Extubation After Cardiac Surgery

Myra Ellis, Debra Farrell, Heather Pena, Mollie Kettle, Timothy Johnson, Alexandra Rudolph; Duke University Hospital, Durham, NC

Purpose

The Society of Thoracic Surgeons defines prolonged ventilation as longer than 24 hours and early extubation as occurring within 6 hours of surgery. The ability to extubate patients promptly after cardiac surgery is multifactorial and requires collaboration among members of the interprofessional team. The purpose of this replication study was to determine the current intubation time in stable patients who had undergone cardiac surgery and to delineate barriers to early extubation within 6 hours.

Background/Significance

Early extubation in stable patients who have undergone cardiac surgery decreases resource use and length of stay (LOS) in the intensive care unit (ICU) and is associated with decreased mortality and morbidity rates without increasing postoperative complications or reintubation rates when compared with conventional care (ie, not fast tracked). In contrast, longer ventilation times are associated with increased ICU and hospital LOS, higher health costs, and increased risk of pulmonary complications.

Method

This prospective, descriptive study used a convenience sample of 101 patients who had undergone cardiac surgery and were identified by the interprofessional surgical team as stable. Data collection occurred in 2 cohorts to account for variability in our academic health system in July 2016. Cohort 1 (n = 50) underwent cardiac surgery in June or July 2016, and cohort 2 (n = 51) underwent surgery in September or October 2016. Intubation times were obtained from the electronic health record (EHR). Barriers to extubation were defined for consistency, identified from the EHR and care nurse documentation, and validated across cohorts. Tracked barriers comprised system and patient problems.

Results

In cohort 1, 52% of patients (26 of 50) were extubated within 6 hours (mean time, 6 hours 55 minutes; median, 5 hours 44 minutes), and 36% (18 of 50) were extubated between 6 and 8 hours. In cohort 2, 56.8% of patients (29 of 51) were extubated within 6 hours (mean time, 6 hours 55 minutes; median, 5 hours 57 minutes), and 23.5% (12 of 51) were extubated between 6 and 8 hours. Barriers to extubation were categorized in 3 system-specific groups and 1 patient-specific group: (1) work-flow issues (orders and ventilator changes not made during a 2-hour window at shift change); (2) lack of clarity about what defined stable patients (no standard definition existed); (3) variations in weaning process (eg, slow propofol weaning); and (4) patient specific (eg, respiratory or metabolic acidosis, altered mental status).

Conclusion

Findings that only 54.4% of stable patients who had undergone cardiac surgery (55 of 101) were extubated within the defined early extubation period and an additional 29.7% (30 of 101) were extubated within 6 to 8 hours suggest that eliminating barriers could significantly improve extubation times. Next steps include dissemination to the interprofessional team, standardization of eligibility criteria for early extubation, and implementation of a fast-track extubation protocol.

RS18 Impact of a Sepsis Coordinator on Outcomes of Adult Patients with Sepsis

Angela Smith; Arkansas State University, Jonesboro, AR

Purpose

The position of a sepsis coordinator was developed at an acute care facility to identify areas of opportunity in the care of patients with sepsis. The purpose of this study was to determine the impact of a sepsis coordinator on mortality rates of hospitalized patients, length of stay (LOS), and evidence-based care (completion of 3-hour sepsis-bundle care elements) in the population of adult patients with sepsis.

Background/Significance

Sepsis is a significant health problem, with 3 million cases of severe sepsis and 750 000 deaths due to sepsis estimated to occur annually in the United States. Despite efforts targeted at improving outcomes, sepsis remains the 10th leading cause of death in the United States. Evidence indicates that early recognition and prompt management of sepsis using evidence-based care can avert progression in some patients, thus saving lives and avoiding associated costs.

Method

A quantitative comparative study was conducted of data from existing medical records at a local, 425-bed acute care hospital. Patients who met the inclusion criteria (age 18-80 years with an admitting International Classification of Diseases, 10th Revision, code related to sepsis) were evaluated for 3 months (October to December 2015) before the addition of the sepsis coordinator, and the data were compared with data from the same 3 months the following year (October to December 2016) after the addition of the sepsis coordinator. Data evaluated included the primary variables of hospital mortality at discharge, hospital LOS, and the completion of the 3-hour sepsis bundle as described in the Surviving Sepsis Guidelines. Demographic variables included age, sex, and ethnicity.

Results

Mortality rates at the end of hospital stay were the same before and after the intervention (ie, addition of the sepsis coordinator; mean, 85.7%; SD, 0.36; t = 0.00; P > .99). Mean LOS was longer in the preintervention group (7.2 days; SD, 0.88 days) than in the postintervention group (6.3 days; SD, 0.72 days) but the difference was not statistically significant (t = 0.66; P = .51). However, completion of the 3-hour sepsis bundle differed significantly between the 2 study periods (preintervention group: mean, 67.8%, SD, 0.48; postintervention group: mean, 92.8%, SD, 0.26; t = 2.436; P = .02 [P ≤ .05 considered significant]).

Conclusion

The addition of a sepsis coordinator to the interdisciplinary team had a significant impact on the implementation of the 3-hour bundle, which, according to the literature, improves patient outcomes. Despite the increased compliance with Surviving Sepsis Guidelines bundle completion, mortality rate and hospital LOS did not change significantly. Although the data for these variables were not significantly different from before to after addition of a sepsis coordinator, studies including a larger sample and additional variables could prove valuable.

RS19 Factors Independently Associated With Critical Care Preceptor Self-Efficacy

Bertie Chuong, Wei Teng, Janet Parkosewich; Yale New Haven Hospital, New Haven, CT

Purpose

Nurse preceptors who have high self-efficacy (SE) in their role positively influence their orientees’ experiences. SE reflects a person’s confidence that they can organize and competently carry out a behavior. The purpose of this study was to examine demographic characteristics and other factors (ie, performance accomplishment, including clinical leadership behaviors; vicarious experience; social persuasion; and emotional factors) independently associated with critical care (CC) nurse preceptors’ SE.

Background/Significance

CC nurse turnover creates an unstable environment. The key to minimizing turnover is to provide an effective orientation by well-prepared preceptors who are confident serving in this role and use clinical leadership behaviors. Preceptors often receive little preparation. They may lack confidence, knowledge, or skill needed to facilitate orientees’ learning, resulting in low preceptor SE. To our knowledge, there are no studies examining factors that shape CC preceptors’ SE.

Method

This study used a descriptive correlational design with a convenience sample of 104 CC preceptors from 16 CC areas of a large Magnet hospital. The study received human subjects approval. Nurses completed 3 study instruments via an electronic anonymous survey: The CC Preceptor Information Form, Patrick’s Clinical Leadership Survey, and Parsons’ Preceptor SE Questionnaire. Bivariate analysis was used to examine associations between the independent variables (eg, demographic, clinical leadership) and the dependent variable (ie, preceptors’ SE). Multiple regression was used to adjust for variables; P ≤ .05 indicated a factor’s independent association with preceptor SE.

Results

Most (91%) of the sample were women; the mean age was 43 years and the mean time serving as a preceptor was 12 years. Higher SE scores were associated with male sex (P = .01); older age (P = .02); 4 performance accomplishment factors: higher clinical leadership score (P < .001), feeling well prepared for the role (P < .001), having attended a leadership class (P = .01), and having attended a preceptor class (P = .03); and 1 emotional factor: lower anxiety about being a preceptor (P = .001). Factors independently associated with higher SE scores were higher clinical leadership score (P = .003), feeling well prepared for the role (P = .01), and less anxiety about being a preceptor (P = .001); together these factors explained 34% of the variance in the model.

Conclusion

This study adjusted for important factors influencing CC preceptors’ SE. The results indicate several ways to enhance SE. CC nurse leaders need to communicate with new preceptors to determine how well prepared they feel before serving as a preceptor, offer opportunities for developing preceptor and clinical leadership skills, and examine situations that cause anxiety and intervene to minimize this distressing symptom. Ongoing measurement of preceptor SE may be warranted.

RS20 Complexity Assessment and Monitoring to Ensure Optimal Outcomes (CAMEO) Acuity: A Global Perspective

Beverly Small, Christine LaGrasta, Patricia Hickey, Jean Connor; Boston Children’s Hospital, Boston, MA

Purpose

To use an acuity tool that comprehensively measures the complex care and cognitive workload performed by pediatric cardiovascular nurses in a global setting. The objective of this study was to describe and evaluate use of the Complexity Assessment and Monitoring to Ensure Optimal Outcomes (CAMEO) acuity tool, which measures nursing cognitive workload, in a pediatric intensive care unit in a global setting.

Background/Significance

The CAMEO acuity tool quantifies the cognitive workload of pediatric nursing care and the complexity (ie, skill, concentration, and surveillance) required to provide care. The CAMEO acuity tool has been implemented in the intensive care units (ICUs) and acute care areas at our institution. Since 2006, we have conducted missions to establish a pediatric cardiac surgery program at Komfo Anokye Teaching Hospital (KATH; Kumasi, Ghana). More than 1000 children have been evaluated and 112 children have undergone pediatric cardiac surgery. A pilot study of the CAMEO acuity tool was conducted in this setting.

Method

The CAMEO acuity tool was used to document and describe the unique experience and the complexity of the nursing workload at KATH. Ten domains measured direct and indirect care items. A CAMEO tool was completed each shift for each patient admitted to the cardiac ICU. A daily log of observations was recorded for each patient to capture work not otherwise accounted for by the tool. Total scores for each tool were calculated and categorized by using a classification scoring system (score range, I-V, where I represents stable patients requiring minimal intervention and V represents patients with clinical instability and/or complex care coordination needs). Specific activities were analyzed descriptively by using counts and frequencies.

Results

Among the 12 patients who underwent cardiac surgical intervention, a total of 63 CAMEO forms were completed. Use of the CAMEO tool revealed a high level of cognitive workload: 76.2% of CAMEOs were classified as III, IV, or V, and 71.9% of the forms indicated that at least 1 procedure took place in the ICU during that shift. Sixty-seven percent of shifts required more than 6 activities for care coordination, discharge planning, and education. More than 3 professional/environmental management activities were performed during 96.5% of shifts.

Conclusion

Overall, the CAMEO tool captured the cognitive workload and described the complexity of nursing care during the mission at KATH. It also captured the importance of precepting, teaching, education, and acting as a consultant while highlighting how cultural and language barriers added to the cognitive complexity of the nursing workload.

RS21 Health Outcomes of Patients Repaired by Hearts and Minds of Ghana

Beverly Small, Christine Placidi; Boston Children’s Hospital, Boston, MA

Purpose

To evaluate the health outcomes of patients who underwent congenital heart disease repair by Hearts and Minds of Ghana.

Background/Significance

Congenital heart disease (CHD) is the most common congenital disease of newborns, and without surgical correction, it is often fatal. Currently, Hearts and Minds of Ghana, led by our institution, is the only pediatric cardiothoracic surgical team in West Africa. Since 2007, Hearts and Minds of Ghana has conducted surgical missions for pediatric patients with CHD at Komfo Anokye Teaching Hospital in Kumasi, Ghana.

Method

As part of follow-up clinic visits in April and October 2016, surveys were administered to patients and families of the patients who underwent CHD repair by Hearts and Minds of Ghana between 2007 and 2015. Questions included an assessment of health status and functional ability, health system use, medication use, and family planning. Queries regarding functional status were designed to be proxies for the New York Heart Association and modified Ross classifications for heart failure. School attendance and an ability to perform household chores were used as indicators of social engagement and functional status.

Results

Hearts and Minds of Ghana performed 118 procedures between 2007 and 2016. A total of 46 patients and their families were administered surveys. As a measure of postoperative functional status, 100% of patients were attending school or had graduated from high school after their corrective repair. Approximately 98% had been able to return to doing chores in some capacity around the house; of those children, 80% were doing the same chores as their healthy siblings. Of the 9 female patients who qualified for the contraceptive counseling and education portion, 7 (78%) stated that they were more likely to consider family planning methods and use contraception.

Conclusion

Patients who have undergone corrective cardiac repair have unique follow-up needs. Resource constraints and social determinants of health often make these needs more pronounced. This cohort demonstrates a high level of health-seeking behavior. Given the needs of this vulnerable population and their complex transition to adulthood, it is necessary to have appropriately skilled health care teams supporting their long-term health needs.

RS22 Variability in Critical Care Nurses’ Customization of Physiologic Monitor Alarm Settings

Halley Ruppel; Yale School of Nursing, Orange, CT

Purpose

Alarm customization is the process of changing alarm settings on physiologic monitors to reflect patient-specific conditions. The purpose of this study was to describe nurses’ alarm customization practices (ie, the frequency and types of alarms customized) in intensive care units (ICUs).

Background/Significance

Alarm fatigue occurs when nurses become desensitized to alarms because the alarms are often false or nonactionable. As a result, true critical events may be missed, compromising patient safety. Nonactionable alarms (true alarms that are not relevant to patient care) can be reduced by customizing alarms. However, little is known about whether and how nurses customize alarms. Understanding current practices for alarm customization will inform educational approaches to improving nurses’ customization.

Method

In this observational study, patient monitors in 3 ICUs (cardiac, medical, and surgical) were sampled for 2 months. Monitor alarm settings were compared with the default alarm settings for each unit, and deviations were recorded. Data were collected on the types of alarms that were activated, deactivated, or had a change in limits (and the amount of deviation from the default) by using a scannable form developed by the researcher. Owing to limitations in technology, only alarms related to electrocardiographic monitoring were recorded (eg, heart rate and arrhythmia alarms). Data were collected during various shifts and days of the week and were analyzed by using descriptive statistics in SAS (SAS Institute).

Results

Of the 298 patient monitors reviewed, 59% had at least 1 alarm setting changed from the default. The number of alarms customized per monitor ranged from 0 to 14. Of the 175 monitors with at least 1 alarm type customized, 98 (56%) had at least 1 alarm type activated or deactivated, and 146 (83%) had at least 1 limit increased or decreased. The most commonly deactivated alarm types were irregular heart rate (n = 70) and atrial fibrillation (n = 58). Although the heart rate high limit was changed on 108 monitors, the heart rate low limit was changed on only 62. Unit-specific differences were also noted (eg, nurses in the cardiac ICU were more likely than nurses in other units to customize premature ventricular contraction alarms; P < .001).

Conclusion

Customization was frequent but variability in practice was noted within and between units. Many findings were consistent with expectations, but several areas for future examination were noted (eg, nurses’ decision to customize high but not low heart rate limits). Results from this study can be used in the development of customization education. The study is relevant to the mission of the American Association of Critical-Care Nurses because it explores nurse contributions to reducing alarm fatigue via patient-specific customization of alarms.

Disclosure: Halley Ruppel is working on a study with Philips Healthcare and Yale New Haven Hospital. She has no personal financial involvement or gain, but the hospital is receiving equipment.

RS23 Intersection Between Sepsis Not Present on Admission and Central Catheter–Associated Bloodstream Infections: Connections Aimed at Patient Safety

Sandra Tobar, Russell Olmsted, Rachel Kast; Trinity Health, Novi, MI

Purpose

Our health system is engaged in 2 initiatives: (1) reducing the frequency of hospital-acquired infections (HAIs) and (2) optimizing early recognition and treatment of sepsis. These are typically viewed as separate efforts; however, our analysis identified a correlation between central catheter–associated bloodstream infections (CLABSIs) and sepsis not present on admission (non-POA). The goal of this study was to quantify the increased risk of sepsis and mortality associated with development of an HAI.

Background/Significance

The Centers for Disease Control and Prevention indicates that annually in the United States, more than 1.5 million people develop sepsis, approximately 250 000 Americans die of sepsis, and one-third of patients who die in a hospital have sepsis. HAIs can lead to sepsis, and cases of non-POA sepsis are often more challenging to recognize. Strategies to prevent HAIs are typically developed and implemented independently of sepsis-treatment programs. A better understanding of the correlation between HAI and sepsis is needed.

Method

Our institutional review board determined this work to be quality improvement. Line list records of 265 catheter-associated urinary tract infections (CAUTIs), 1356 laboratory-identified Clostridium difficile (C diff) infections, 203 CLABSIs, 72 methicillin-resistant Staphylococcus aureus (MRSA) bacteremias, and 157 colon surgical site infections (SSIs) were queried from the National Healthcare Safety Network for the first 9 months of 2016. Patients with HAI were compared with patients without HAI in a 1:1 case-control study, matched on hospital and primary Medicare Severity-Diagnosis Related Group code in SAS. Sepsis rate, type of sepsis, and mortality rate were compared between case and control groups for each infection, and risk ratios were calculated.

Results

Data were available for 211 CAUTIs, 1116 C diff infections, 166 CLABSIs, 58 cases of MRSA bacteremia, and 127 cases of colon SSI. Sepsis rates were higher for all HAI sites than in control groups, with risk ratios ranging from 1.20 to 2.00 (CAUTI, 41.7% vs 28.9% control; C diff, 38.6% vs 32.1%; CLABSI, 80.7% vs 40.4%; MRSA, 74.1% vs 41.4%; SSI, 36.0% vs 29.6%). Rates of septic shock were also higher in the HAI population, especially for CLABSI (risk ratio, 2.64). All infections had a higher mortality rate than their respective control groups. Notably, patients with CLABSI had a mortality risk ratio of 2.31 and patients with MRSA bacteremia had a mortality risk ratio of 3.98.
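As an illustrative sanity check (not part of the study’s SAS analysis), the reported sepsis risk ratios follow directly from the paired case and control rates:

```python
# Illustrative check of the reported sepsis risk ratios (case rate / control rate).
# Percentages are those reported in the abstract; this is not the study's own code.
rates = {
    "CAUTI": (41.7, 28.9),
    "C diff": (38.6, 32.1),
    "CLABSI": (80.7, 40.4),
    "MRSA bacteremia": (74.1, 41.4),
    "Colon SSI": (36.0, 29.6),
}

risk_ratios = {hai: round(case / control, 2) for hai, (case, control) in rates.items()}
print(risk_ratios)
# The smallest and largest ratios reproduce the reported range of 1.20 to 2.00.
```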

Conclusion

Patients in whom an HAI develops have significantly higher risk of sepsis and mortality than do matched control patients without HAI. Thus, infection prevention and sepsis teams should work closely together, especially for patients with CLABSI or MRSA bacteremia. Strategies to facilitate the partnership include having sepsis coordinators and infection preventionists at each of the hospitals work together toward a common goal of reducing HAIs and, subsequently, non-POA sepsis on the journey to achieve no harm.

Disclosure: Speakers’ bureau, Ethicon; external faculty, Health Research & Educational Trust Collaborative Projects, administered by the Association for Professionals in Infection Control & Epidemiology; faculty, Safety Institute, Premier Inc.

RS24 Effect of a Scheduled Nursing Intervention on Thirst and Dry Mouth in the Intensive Care Unit

Deborah Lampo; WellSpan Health System, York, PA

Purpose

To evaluate the effectiveness of scheduled use of cold water–moistened oral swabs and menthol lip moisturizer compared with unscheduled treatment for relieving thirst and dry mouth for patients in the intensive care unit (ICU).

Background/Significance

Thirst is a common and intense symptom reported by patients in the ICU. Evidence supports using cold-water interventions (eg, moistened oral swabs, water spray, moistened gauze) and lip moisturizer with menthol to ameliorate thirst and dry mouth, but to our knowledge no studies have prescribed a frequency of use. In an audit of 30 patients in the ICU who were not receiving ventilatory support, 66% reported dry mouth, with thirst distress and intensity scores higher than those in published studies.

Method

This institutional review board–approved study used a quasi-experimental design. Adult patients admitted to 2 ICUs within a large community teaching hospital were invited to participate. The scheduled intervention unit provided treatments hourly for 7 hours (n = 62). The unscheduled intervention unit provided usual care (n = 41). A numeric rating scale (range, 0-10) was used to measure thirst intensity, thirst distress, and dry mouth before and after 7 hours. Descriptive and nonparametric statistical tests were used.

Results

Both groups showed statistically significant improvement in postintervention thirst intensity, thirst distress, and dry mouth compared with before the intervention. According to results of Mann-Whitney U tests, thirst intensity (P = .02) and dry mouth (P = .008) differed significantly between groups, whereas no difference was found in the amount of change in thirst distress (P = .07) between groups. The mean use of the interventions was significantly higher in the scheduled group (P < .001).

Conclusion

Frequent use of cold water–moistened oral swabs and menthol lip moisturizer may improve patients’ reports of thirst intensity and dry mouth. By anticipating symptoms of thirst and dry mouth in hospitalized patients, nurses can confidently offer frequent access to simple, inexpensive, nurse-driven interventions to ameliorate this common discomfort.

Disclosure: The George L. Lavery Foundation provided funding for research salaries and associated costs.

RS25 Can We Do Something About the Noise in Our Unit?

Patricia Meehan, Mary O’Brien, Kathleen Marine, Martha Curley; Boston Children’s Hospital, Boston, MA

Purpose

To describe the noise, sleep opportunities, and patient disturbances in a 30-bed pediatric intensive care unit (PICU) and to use those data to guide interventions to promote a more healing PICU environment. A secondary aim was to build a more therapeutic work environment for the interprofessional team practicing in the PICU.

Background/Significance

Today’s PICUs are not healing milieus. Immediately upon admission, the child’s routine and sleep pattern are replaced by a well-intentioned but not patient-centered PICU routine. Although criticality drives PICU therapies, much can be done to create a more healing environment that facilitates sleep and rest for critically ill patients and provides a calmer environment for families. Environmental control also benefits the interprofessional team practicing within the PICU.

Method

We prospectively monitored noise levels throughout our 30-bed PICU by using Quietyme, a system that combines sensors and analytics to monitor environmental noise, light, and temperature. Sensors were plugged into existing electrical outlets at the head of each bed and in adjacent hallways and common areas. The sensors continuously sent data wirelessly to a central hub, where Quietyme’s dashboard provided real-time data by bed space and unit. Using Quietyme’s analytics, we describe our unit’s noise levels in decibels (reporting the range of the weekly mean), sleep opportunities (90-minute segments per 24 hours with below-average noise levels), and patient disturbances (periods when sound levels rose above 65 dB).

Results

Our loudest bed space averaged 80.6 dB (range, 75.5-83.5 dB), our quietest averaged 35.6 dB (range, 34.5-37 dB), and our unit average was 44.4 dB (range, 40.9-46.6 dB). (Note: a 10-dB increase is perceived as roughly a doubling of loudness.) The unit was quieter on nights by 1.84 dB and on weekends by 0.7 dB. Comparing areas of the unit, we identified a louder cluster of rooms (beds 16-19; 70.7 dB) and a quieter area (beds 12-14; 67.2 dB). According to Quietyme analytics, nights offered twice as many sleep opportunities as days (days, n = 642; nights, n = 1186), and days had twice as many patient disturbances as nights (days, n = 14 775; nights, n = 7556). Our unit was loudest at 7 am (shift change) and between 9 am and 11 am (interprofessional rounds).
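As background for interpreting these values (an illustrative calculation, not part of the study), decibels are logarithmic: each 10-dB step corresponds to a 10-fold change in sound intensity, which listeners perceive as roughly a doubling of loudness.

```python
# Illustrative decibel arithmetic (logarithmic scale); not study code.
def intensity_ratio(db_high: float, db_low: float) -> float:
    """Ratio of sound intensities implied by a difference in decibel levels."""
    return 10 ** ((db_high - db_low) / 10)

# A 10-dB increase is a 10-fold increase in sound intensity.
print(intensity_ratio(55, 45))  # 10.0

# The unit's loudest bed space (80.6 dB) vs its quietest (35.6 dB):
# a 45-dB spread, ie, roughly a 31 600-fold difference in intensity.
print(round(intensity_ratio(80.6, 35.6)))
```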

Conclusion

These data confirmed that our unit is often louder than existing recommendations from the World Health Organization and the Environmental Protection Agency, which advise that sound in hospitals not exceed 40 and 45 dB, respectively. Quietyme provided objective data and recommendations to improve our current state. Next steps include risk adjusting these data by patient criticality and using Quietyme light data to better enable nurses to create sleep and rest opportunities for patients.

RS26 Caritas Education: Theory to Practice

Kim Rossillo; St Joseph Hospital, Orange, CA

Purpose

Caring, healing relationships are at the core of professional nursing. The purpose of this project was to design and deliver an educational seminar based on Jean Watson’s theory of human caring to newly graduated nurses to examine the effect on self-efficacy in caring behaviors.

Background/Significance

Increasingly complex occupational demands along with varied educational and personal examples of caring may affect the ability to connect deeply with patients. According to the literature, nurses and patients have differing perceptions of caring behaviors. Jean Watson’s theory of human caring provides a framework for care delivery that focuses on the caring nurse-patient relationship and the experience through the lens of the patient.

Method

The project participants (N = 56) were a nonprobability convenience sample of new graduate nurses at a local community hospital. The educational intervention consisted of experiential learning activities to facilitate translating theory to practice. The study used the Caring Efficacy Scale (CES), an instrument based on Watson’s caring theory and Albert Bandura’s self-efficacy theory, administered before and immediately after the intervention; the CES was also administered 6 to 9 months after the educational intervention to measure its long-term influence.

Results

A significant improvement was found in caring efficacy immediately after the intervention (mean CES score, 5.5; SD, 0.38) compared with before the intervention (mean CES score, 5.1; SD, 0.47; t52 = −9.09; P < .001). Key themes from the open-ended survey questions focused on the nurse-patient relationship, “seeing” the experience through the patient’s lens, care for the caregiver, and healing that encompasses the mind, body, and spirit. The themes suggested growth in perceived emotional competencies after the intervention. The CES results 6 to 9 months after the intervention demonstrated a sustained perception of caring efficacy.

Conclusion

The knowledge from this study could provide insights for the development of effective teaching strategies to facilitate translating nursing theory to practice. Establishing and developing skills to facilitate nurturing, caring nurse-patient relationships may enhance the patient and caregiver experience. Engaging and developing self-care in newly graduated nurses early in their career may improve resiliency and decrease burnout.

RS27 Quantification of Early Mobility in the Intensive Care Unit: An Exploratory Evaluation of Electronic Health Record Documentation

Sarina Fazio, Amy Doroy, Natalie Da Marto, Jason Adams; UC Davis Medical Center, Sacramento, CA

Purpose

To explore how physical activity is measured in the electronic health record (EHR) and the extent to which clinical notes accurately quantify early mobility interventions among patients in the adult intensive care unit (ICU).

Background/Significance

Inactivity is pervasive among patients in the ICU and has been associated with increased mortality rate, functional decline, and cognitive impairment. Although early mobility mitigates these outcomes, there is no agreement on the optimal dose of mobility in the ICU and no consensus regarding the optimal method of measuring mobility delivery, to our knowledge. Improved measurement of early mobility is necessary to study patients’ activity and physical functioning in the ICU and guide care planning.

Method

This exploratory study compared methods of quantifying early mobility in the ICU. Participants were adults hospitalized in the medical ICU of a large academic medical center who were eligible for early mobility therapy. Participants representing each level of early mobility according to American Association of Critical-Care Nurses guidelines were enrolled and observed for up to 24 hours. After informed consent was obtained, a video camera was mounted in the patient’s room, and the video data were reviewed by multiple ICU clinicians to derive a gold standard of patient movement. Clinical documentation of physical activity and early mobility interventions coinciding with the video recording time was extracted from the EHR for analysis.

Results

A total of 90 hours of video and EHR data from 4 patients in the ICU were recorded, reviewed, and analyzed. In the EHR, only type and frequency of physical activity could be consistently measured, compared with the clinician-annotated video, which yielded more detailed metrics of activity. Mean total activity duration was 6.3 hours; 1 ambulation episode, 6 standing episodes, 14 sitting episodes, 32 turning episodes, and 7 range-of-motion exercise episodes were observed across all participants. The highest level of agreement occurred with ambulation and sitting out of bed. Although in-bed activity accounted for the most frequently observed activity type, it could not be measured in the EHR.

Conclusion

This study illustrates important barriers to relying on EHR data to accurately quantify early mobility and patient activity in the ICU. Compared with clinician-annotated video, the EHR provides a limited snapshot of patient activity beyond highest mobility level and becomes more inaccurate when patients are less dependent on clinicians. Improved methods to measure activity of patients in the ICU are necessary to advance the study of ICU early mobility and inform delivery of more effective, evidence-based care.

RS28 Cardiac Arrest Associated With Endotracheal Suctioning After Surgery for Congenital Heart Disease

Anna Fisk; Boston Children’s Hospital, Boston, MA

Purpose

Endotracheal suctioning can lead to cardiac arrest in some children after congenital heart surgery. The study objectives were to (1) determine the characteristics of patients who experienced cardiac arrest during suctioning compared with patients whose cardiac arrest did not occur during suctioning and patients who underwent suctioning without cardiac arrest, (2) identify changes in physiological parameters before an event, and (3) examine the preceding events to determine any precipitating or exacerbating factors.

Background/Significance

Current endotracheal suctioning guidelines for children with congenital heart disease (CHD) have been based on adult or general pediatric studies. Suctioning can have a significant hemodynamic effect on children with CHD and has the potential to precipitate cardiac arrest. Moreover, hospitalized children with CHD have a higher risk of cardiac arrest (7 per 1000 hospitalizations) than do children without CHD (0.54 per 1000). In this study, data from before arrest were examined to identify predictive factors.

Method

The sample included 135 suctioning events in pediatric patients who underwent CHD surgery. Bivariate analysis was used to compare patients who experienced cardiac arrest with those who did not. Multivariate analysis was then performed on data from patients who experienced cardiac arrest during endotracheal suctioning, patients whose cardiac arrest did not occur during suctioning, and patients who underwent suctioning without cardiac arrest. Multinomial logistic regression was conducted with all variables that had a P value less than .20 in the bivariate analysis. In the final model, heart rate, chemical paralysis, intensive care unit length of stay, and survival to discharge were significant variables.

Results

Patients who experienced cardiac arrest associated with suctioning had a more than 20% change in heart rate within 30 minutes before the arrest event, indicating a possible signal of increased risk. In addition, patients who experienced cardiac arrest associated with suctioning were not chemically paralyzed, indicating a period of vulnerability compared with suctioning while chemically paralyzed; had a longer postoperative stay in the intensive care unit, thus increasing resource use; and had a higher mortality rate than did patients who experienced suctioning but did not experience cardiac arrest.

Conclusion

Based on findings from this exploratory study, there may be clinical signals indicating risk of cardiac arrest associated with suctioning. Worldwide, endotracheal suctioning is a routine procedure after CHD surgery; consequently, research to avert cardiac arrest during endotracheal suctioning is essential to improving patient outcomes. Although more prevalent in CHD, cardiac arrest is still relatively infrequent; therefore, study findings will inform a power analysis for a multisite study.

RS29 Resuscitation Science: Educating the SMART Way

Mandi Walker; University of Louisville Hospital, Louisville, KY

Purpose

To analyze the efficacy of online versus conventional instructor-led advanced cardiac life support (ACLS) education. Emergency department (ED) and intensive care unit (ICU) nurses care for highly complex, critically ill patients in a dynamic, high-stress environment. With an increasing emphasis on decreasing education time away from the bedside, it is important to understand the most effective educational modality for first-time registered nurse ACLS course participants.

Background/Significance

Prompt recognition of and appropriate response to cardiac arrest by nurses can lead to improved survival rates and neurological outcomes. Evidence shows that high-quality basic life support and ACLS education can improve patients’ outcomes by decreasing code rates and increasing neurologically intact survival. The American Heart Association accepts successful completion of online ACLS education as initial provider certification for ACLS. However, evidence that online education is equivalent to instructor-led education for psychomotor and affective learning is lacking.

Method

In a randomized controlled quantitative comparison study, we evaluated scores on the ACLS written examination and performance in the simulation laboratory as megacode team leader. First-time registered nurse participants requiring a full ACLS certification course between March and September 2015 were randomly assigned to an intervention group that used online ACLS education or a control group that used simulation-based, instructor-led ACLS education. Data obtained and analyzed included demographics, ACLS pretest scores, scores on the standard ACLS multiple-choice test, binary pass or fail of a final megacode per American Heart Association standards, scores on the Simulated Megacode ACLS Resuscitation Tool (SMART), and course evaluations.

Results

Scores on the ACLS pretest and demographic data did not differ significantly between the 2 groups. Outcomes included no significant difference in scores on the final written test (P = .79), but a significant difference in first-attempt failure rate for the written test between the online group and the instructor-led group (P = .02). Nurses in the instructor-led group showed significantly better performance as code team leader in both the first-attempt megacode pass rate (P = .002) and SMART scores (P < .001).

Conclusion

For first-time registered nurse participants in ACLS education, a simulation-based, instructor-led ACLS course is superior to an online ACLS course for psychomotor and affective learning. Evaluations indicated that the ability to ask questions of an instructor and to have real-life stories to associate with the learning aided in knowledge acquisition. The realistic environment, team learning, and repetition of the instructor-led course resulted in understanding of and performance in the role of code team leader.

RS30 A Comparative Pilot Study of Supply Tray Management in the Medical Intensive Care Unit

Jill Kristina Kane, Kristin Hover, Carol Ritter; Christiana Care Health System, Newark, DE

Purpose

This comparative pilot study gauged whether microbial contamination differs between repeated-use and exchanged in-room supply trays. Our purpose was to explore 2 supply tray practices with respect to the following: statistical microbial disparity between repeated-use and exchanged supply trays, correlation between supply practice methods and hospital-acquired infections, and inventory supply cost.

Background/Significance

The historical clinical practice in the 22-bed medical ICU (MICU) was to reuse in-room supply trays and contents from patient to patient without disinfection. MICU peers assumed that supply trays were a source of microbial contamination, thus posing an exposure risk to patients. Hence, an alternative practice was initiated in March 2016: with each patient turnover, used supply items would be discarded and trays would undergo disinfection and be exchanged for sanitized, newly stocked trays.

Method

This research was both a quasi-experimental study for tray microbial load comparisons and a performance improvement initiative for cost analyses. After approval by the institutional review board, data were collected from December 2016 to April 2017. In a pretest/posttest design, each supply tray and 2 tray items were subjected to adenosine triphosphate testing and blood agar growth analysis. These 2 measures quantified bioburden, providing both quantification of microbial growth and broad classification of microorganisms. MICU-acquired infections were tracked by infection prevention personnel. For both supply practices, a 12-month spreadsheet tabulated supply inventory costs.

Results

Microbial contamination did not differ significantly between repeated-use and exchanged supply trays. No statistically significant relationship was found between tray practices and unit-acquired infections. Inventory cost results demonstrated a 44% increase after the change from the traditional practice to the alternative tray and supply practice. From March 2016 to March 2017, 1590 patient turnovers occurred, each with used supply tray items disposed of in the trash and/or sharps container, at a deficit of more than $39 000.
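As a rough illustration (a back-of-the-envelope calculation derived from the reported totals, not a figure from the study), the deficit and turnover counts imply a per-turnover cost of the exchange practice:

```python
# Back-of-the-envelope check of the reported supply-cost deficit.
# Totals are from the abstract; the per-turnover value is derived, not reported.
total_deficit_usd = 39_000   # "more than $39 000" over the 12-month period (lower bound)
patient_turnovers = 1_590    # March 2016 to March 2017

per_turnover = total_deficit_usd / patient_turnovers
print(f"~${per_turnover:.2f} per patient turnover")  # roughly $24.53
```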

Conclusion

Exchanged supply trays did not reduce microbial contamination or decrease hospital-acquired infections and supply costs. The MICU will return to traditional practice and continue efforts to limit microbial cross-contamination. Further research is needed to compare microbial burden between supply trays and other high-touch surfaces (eg, bed rails, cardiac monitors) and to contrast microbial burden between in-room supply trays and supply trays located outside of patient care rooms.

Footnotes

Presented at the AACN National Teaching Institute in Boston, Massachusetts, May 21-24, 2018.