RS23 Factors Related to Nurses’ Comfort Managing Pain at the End of Life: A Multicenter Survey

Lynn Mackinson, Sharon O’Donoghue, Priscilla Gazarian, Kathleen Turner; Beth Israel Deaconess Medical Center, Boston, MA

Purpose

Symptom management in the intensive care unit (ICU) at the end of life (EOL) requires specialized knowledge and skill, with nurses playing an integral role in providing quality patient/family-focused care. The study purpose was to identify factors related to ICU nurses’ comfort with one domain of EOL care, pain management, including knowledge of guidelines, education, and experience with EOL care. A second goal was to compare documentation of pain management at the EOL with adherence to institutional guidelines.

Background

One-fifth of Americans die during or shortly after an ICU stay, with patients often suffering pain at the EOL. Mularski et al identified 18 metrics to assess quality EOL care, including 2 that address pain assessment and management. Inadequate education has been identified as a barrier to quality EOL care. Clinicians believe they understand symptom management, but an unaddressed symptom burden often exists. This study aimed to determine what factors affect nurses’ comfort with EOL pain management.

Method

A multicenter survey of 270 ICU nurses at 4 academic medical centers (Beth Israel Deaconess Medical Center, Brigham and Women’s Hospital, University of California San Francisco, and Intermountain Medical Center). Nurses working in the participating ICUs during a 2-week period were eligible. General trends are described with higher scores (on a scale of 0 to 100) indicating more comfort with pain management. Each site also analyzed records of 12 ICU patients who transitioned to EOL care. Data, including documentation of pain and medications administered, were retrospectively extracted from the medical record to evaluate practice patterns and adherence to EOL guidelines.

Results

Overall, nurses reported a high degree (median score, 88) of comfort assessing and managing pain. Only 48.9% reported awareness of their institution’s EOL guidelines, which did not correspond to greater comfort (median score 90 if aware vs 87 if unaware; P = .30). Nurses reporting more experience in caring for EOL patients (> 1 patient/month) tended to report greater comfort than did nurses caring for EOL patients several times a year (scores 92 vs 86; P = .09). Nurses reporting > 4 hours of EOL education reported higher comfort (scores 87 vs 81; P = .06). Despite perceived comfort, on 40% of days in which terminal pain was present, documented assessment and treatment of pain did not follow institutional EOL guidelines.

Conclusion

Nurses reporting more experience caring for patients at the EOL and those who reported > 4 hours of EOL education indicated higher levels of comfort with assessing and managing pain. No association was observed between comfort and awareness of EOL guidelines. A discrepancy exists between nurses’ high degree of reported comfort managing pain and pain management within institutional guidelines. Further EOL education and guideline-targeted training may help to bridge this gap.

RS24 Impact of Moral Distress on Perceptions of Work Environment and Patient Safety

Cathy Hiler; Case Western Reserve University, Cleveland, OH

Purpose

The AACN emphasizes the significance of a healthy work environment for patient safety, quality of care, and nurse retention. The purpose of this study was to determine the relationships among moral distress experienced by critical care nurses, the perceived practice environment, and effects on patient safety.

Background

In the critical care environment, registered nurses provide care for severely ill patients and are predisposed to many psychological and physical dilemmas. Such dilemmas have the potential to create moral distress. Moral distress in registered nurses is known to cause decreased job satisfaction, turnover in staffing, burnout, and stress, therefore having a negative impact on the quality and safety of patient care.

Method

The study used a descriptive correlational research design with a convenience sample of critical care nurses. Participants were recruited via AACN’s e-newsletter and social media sites. Two self-report questionnaires, the Moral Distress Scale-Revised and the Practice Environment Scale of the Nursing Work Index (PES-NWI), were used for this study. Bivariate correlational analyses using Pearson product moment correlation coefficients identified the relationships between the study variables, in this case the relationship between moral distress and perceptions about the nurses’ practice environment and patient safety.
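As a sketch of the bivariate analysis described above, the following Python fragment runs a Pearson product moment correlation with SciPy. The paired values are hypothetical, invented purely for illustration; the study’s raw data are not reproduced in this abstract.

```python
from scipy.stats import pearsonr

# Hypothetical paired observations, one row per nurse:
# a moral distress total and a PES-NWI subscale rating (illustration only).
moral_distress    = [120, 95, 150, 80, 110, 130, 70, 140]
staffing_adequacy = [3.1, 3.8, 2.5, 4.0, 3.4, 2.9, 4.2, 2.6]

# Pearson product moment correlation; a negative r indicates that
# higher moral distress accompanies a poorer-rated practice environment,
# the direction of effect the study reports.
r, p = pearsonr(moral_distress, staffing_adequacy)
print(f"r = {r:.3f}, P = {p:.4f}")
```

In practice, each of the 5 PES-NWI subscales would be correlated with the moral distress score in turn, as in the Results.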

Results

A total of 326 participants completed the survey. Correlation of moral distress with the 5 subscales of the PES-NWI revealed the following: nurse participation in hospital affairs, r(326) = −0.388, P < .001; nursing foundations for quality care, r(326) = −0.400, P < .001; nurse manager ability, leadership, and support of nurses, r(326) = −0.354, P < .001; staffing and resource adequacy, r(326) = −0.414, P < .001; and collegial nurse-physician relations, r(326) = −0.378, P < .001. These uniformly negative, statistically significant correlations indicate that greater moral distress is associated with poorer perceptions of the practice environment and patient safety.

Conclusion

This study has significant implications in that it offers an appraisal of moral distress and its relationships with the practice environment and the quality and safety of patient care as perceived by a national sample of critical care nurses. The results are expected to inform strategies to minimize moral distress and attenuate its impact on patient outcomes.

RS26 Surveillance: A Nursing Intervention for Improving Patient Safety in Critical Care

Dale Pfrimmer, Vickie Ernste, Lori Rhudy, Maren Johnson; Mayo Clinic, Rochester, MN

Purpose

Little is known about how critical care nurses process the vast amounts of information or cues encountered in actual clinical situations. Further research is needed to increase and deepen the understanding of information processing by critical care nurses and its relationship to clinical decision making. The purpose of this study is to explore the nursing intervention of surveillance from an information processing framework in the critical care environment.

Background

Nursing surveillance has been identified as a key intervention in early recognition and prevention of errors and adverse events. Nursing Intervention Classification (NIC) defines surveillance as “the purposeful and ongoing acquisition, interpretation, and synthesis of patient data for clinical decision-making.” Because nurses are the main staffing constant in the ICU environment of high-intensity interventions and complex assessments, the importance of surveillance as an intervention is fundamental.

Method

This study was conducted using a descriptive exploratory research design. Think aloud was used for data collection. Twenty-one registered nurses from 3 ICUs participated in the study. Participants were asked to say out loud whatever they were thinking as they performed patient care at 3 time points: as they received handoff, during their initial assessment, and after 4 hours. Think aloud captures the information (cues) attended to in short-term memory, before it has been processed and stored. Data were analyzed by using content analysis, with key concepts and themes identified.

Results

The overarching theme was “finding meaning” with analysis/synthesis of the information being key for nurses putting together the bigger patient care picture. The main themes identified are “pulling it all together,” “making sure,” and “thinking ahead.” The process of surveillance was evident and specifically expressed during the shift-to-shift handoff, with surveillance primarily demonstrated by the receiving nurse asking questions that helped clarify incomplete, incomprehensible, or missing information. Surveillance included synthesis of patients’ cues to assess the entire length of stay and anticipate patients’ needs both during and after ICU stay.

Conclusion

Nurse interaction during shift-to-shift report is not simply for receiving information but for clarifying and synthesizing data. Creating environments conducive to asking questions and clarifying accuracy of understanding is imperative, especially for less experienced nurses. Nurses’ cognitive work in shift-to-shift handoff is underrecognized; valuing that work, for example by protecting the time and space for it to occur and by limiting disruptions, is important.

RS28 Outcomes of Ventilator-Associated Conditions: Length of Stay, Duration of Ventilation, and Mortality

Mary Lou Sole, Lara Deaton, Aurea Middleton, Melody Bennett; University of Central Florida, Orlando, FL

Purpose

Critically ill patients receiving mechanical ventilation are routinely assessed for ventilator-associated conditions (VAC). VAC surveillance was introduced in 2013 to assess for infectious and noninfectious complications of ventilation. Early research identified worse outcomes for those in whom VAC develops. In this study, length of stay (LOS), ventilator hours, and mortality were evaluated according to VAC status. We hypothesized that outcome variables would be greater in patients with VAC.

Background

Before VAC surveillance, patients were assessed for ventilator-associated pneumonia (VAP). VAP definitions relied on subjective clinical criteria and lacked sensitivity and specificity. The concept of VAC was introduced to provide an objective approach to assess for complications of ventilation that begin with worsening oxygenation. Early research supporting a change in surveillance showed longer intensive care unit and hospital LOS, longer duration of ventilation, and higher mortality in patients with VAC.

Method

A retrospective, descriptive comparative study was conducted with data collected for a clinical trial to reduce aspiration. Inclusion criteria were age greater than 18 years, oral intubation for less than 24 hours at enrollment, and expected intubation for at least 36 hours. Exclusion criteria included documented aspiration at intubation, rescue ventilation therapy (eg, oscillator), oral injuries, and history of head or neck cancer. Upon completion of the study, participants were assessed for the presence of VAC by using the National Healthcare Safety Network (NHSN) criteria and were classified as having VAC or no VAC. If VAC was present, the NHSN algorithm was used to identify infection-related etiologies.

Results

Data from 135 subjects were available for analysis. The mean (SD) age of patients was 55.1 (19.2) years. The majority were male (61%), white (73%), and non-Hispanic (81%). Patients had a variety of primary diagnoses, with medical issues occurring most often (41%), followed by 32% trauma, 23% neurological, and 4% surgical. Twenty subjects (15%) had VAC develop. Of these, 13 were infection-related with 3 classified as possible VAP. Patients with VAC had 5.1 more days in the ICU (P = .01) and 128 more hours of ventilation (P < .001) compared with patients who were VAC-free. Hospital length of stay was 1.6 days longer (P = .74), and mortality was 67% for those with VAC (compared with 37%; P = .05).

Conclusion

Findings mirror earlier studies comparing outcomes of patients with and without VAC. As patients with VAC have more negative outcomes, prevention strategies beyond the traditional “ventilator bundle” are important. VAC has many causes: atelectasis, fluid overload, acute respiratory distress syndrome, and VAP. Additional prevention strategies include astute pulmonary assessment, turning and mobility, airway clearance, alerts for imbalanced intake and output, and infection control.

NIH Funding for project: 1 R01 NR014508-01A1.

RS1 Echocardiography in the Intensive Care Unit: An Educational Intervention for Advanced Practice Provider Students

Tonja Hartjes, Philip Efron, Rohit Patel, Marina Trevisani; University of Florida, Gainesville, FL

Purpose

To evaluate the effectiveness of an online educational module for performing and interpreting point of care (POC) transthoracic echocardiography (TTE) in the intensive care unit. Participants include nurse practitioner (NP), physician assistant (PA), and anesthesia assistant (AA) students. The knowledge gained from this study will provide a basis for incorporating this training curriculum into educational programs for all advanced practice providers (APPs) within the study facility.

Background

Hemodynamic monitoring is a cornerstone of management of critically ill patients. Historically, invasive methods have been used, but this approach has fallen out of favor because of the potential for harm to patients and inherent inaccuracy. POC TTE is a low-risk means of evaluating volume status and cardiac function in critically ill patients that aids in clinical decision making. Consequently, there is a role for increased exposure to TTE within APP training programs.

Method

This prospective observational cohort study of APP students in a surgical ICU was approved by the institutional review board. Volunteer NP and PA student subjects provided consent and were e-mailed a secure Internet link to a 10-question multiple-choice pretest via REDCap. They were then provided an online narrated presentation on how to obtain and interpret 3 basic TTE views. Instruction included video clips of both normal and abnormal cardiac findings, and education on interpretation was included for each view. Knowledge gained was measured by comparing median test scores from pretest to posttest by using the Wilcoxon signed rank test.
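The pre-post comparison described above can be sketched in Python with SciPy. The paired scores below are hypothetical, shaped only loosely like the abstract’s reported medians (50% pretest, 90% posttest); they are not the study’s data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired pretest/posttest percentage scores for 10 students
# (illustration only; not the study's actual results).
pretest  = [40, 50, 50, 60, 50, 40, 60, 50, 50, 30]
posttest = [80, 90, 90, 100, 90, 80, 90, 90, 100, 70]

# Related-samples Wilcoxon signed rank test on the paired differences;
# the statistic is the smaller of the signed rank sums.
stat, p = wilcoxon(pretest, posttest)
print(f"W = {stat}, P = {p:.4f}")
```

Because every hypothetical posttest score exceeds its pretest score, all signed ranks share one sign and the test statistic is 0.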

Results

Fifteen students consented to participate (11 NPs, 3 PAs, 1 AA). Of these, 5 students completed the pretest but not the posttest. Ten students were included in the final analysis (8 NPs, 2 PAs). In evaluation of the benefit to APP training, median test scores improved from 50% to 90% (related-samples Wilcoxon signed rank test, P = .07). On the posttest, 9 students scored 80% or higher. In a Likert survey, the majority of students strongly agreed that the intervention was easy to understand, useful to their area of study, and pertinent to their practice or future practice, and that they were likely to apply the knowledge gained in the future.

Conclusion

The use of a brief online educational intervention can increase NP and PA students’ knowledge of how to obtain and interpret 3 basic POC TTE views. Limitations of the study include a small sample size as well as the lack of a hands-on training component. Implications for the future include incorporating similar educational modalities into APP training programs, as well as integrating POC TTE training into medical student, residency, and fellowship programs.

RS2 Using a New Evidence-Based Trauma Protocol to Improve Care of Patients with Blunt Cardiac Injury

Ilean Genrich, Suela Sulo, Susan O’Mara; Advocate Lutheran General Hospital, Park Ridge, IL

Purpose

To evaluate the effectiveness of a new, evidence-based trauma protocol implemented at Advocate Lutheran General Hospital. Patients who potentially sustained a blunt cardiac injury (BCI) as a result of blunt thoracic trauma (BTT) were included. Goals of the new protocol were to accelerate identification of this diagnosis, reduce resource consumption with more efficient use of laboratory tests, and standardize the response when a BCI was identified.

Background

To improve care provided to BTT patients at our institution, evidence from published studies that examined methods for BCI identification was evaluated. Selection of diagnostic screening tools, including cardiac biomarkers, electrocardiography (ECG), echocardiography, and nuclear studies, and the timing of their use are important in this evaluation. The review showed that some areas of existing practice were outdated, and it led to the development of a new evidence-based trauma protocol.

Method

In a comparative design, 80 patients prospectively treated with the new trauma protocol were compared with the medical records of 80 former patients treated according to existing practice. A data collection form recorded the required variables. Descriptive statistics summarized sample and treatment characteristics and outcomes in the 2 groups. The primary end point, reported duration of ECG monitoring, was compared between groups by using the Student t test. The Student t test, χ2 test, or Fisher exact test was used to compare groups by demographic characteristics and diagnostic findings. The phi coefficient was used to assess the correlation between troponin I levels and echocardiography results. Analyses were conducted by using SPSS 20.0.

Results

The evaluation examined whether an evidence-based trauma protocol improved BCI identification, patient care, and efficiency by reducing unnecessary studies. Implementing the protocol improved detection of abnormal troponin I levels and proved cost-effective. The protocol created a 6-hour window after a potential BCI patient was identified in which troponin I levels were evaluated, allowing an efficient response to any abnormality. The duration of continuous ECG monitoring for inpatients decreased by 4.23 days, and echocardiography use decreased by 70% in patients with normal troponin I levels, resulting in significant cost savings at Advocate Lutheran General Hospital.

Conclusion

A new evidence-based trauma protocol brought improved and cost-effective care to BCI patients at our facility. A 6-hour time frame for troponin I evaluations resulted in more timely interventions. Unnecessary laboratory tests were eliminated. Patients whose troponin I levels and ECG findings were normal were placed in a non-monitored setting, and echocardiography use was reduced to a defined population. Care of BCI patients has been enhanced with this protocol.

RS3 Therapeutic Hypothermia Protocols After Prehospital Cardiac Arrest

Jessica Wyse, Molly McNett; MetroHealth, Cleveland, OH

Purpose

To investigate the effects of initial implementation of a therapeutic hypothermia (TH) protocol on patient mortality, hospital length of stay, and discharge disposition among patients who experienced out-of-hospital cardiac arrest (OHCA). Secondary aims were to evaluate degree of protocol compliance to identify nurse-specific barriers and solutions for routine integration into practice.

Background

More than 300 000 individuals experience OHCA each year, with survival rates consistently less than 10%. TH is one strategy to mitigate adverse effects of OHCA, as studies have shown positive outcomes for mortality and neurological function. However, integration of TH protocols into routine practice can be challenging. Little research has investigated the immediate effects of initial TH protocols on patient outcomes and compliance from a nursing perspective.

Method

A retrospective cohort design was used to gather data from all patients experiencing OHCA before and after implementation of a TH protocol within a large academic public hospital. The sample included all patients more than 18 years old experiencing OHCA. Demographic and clinical data were abstracted from medical records of both TH and non-TH groups. Additional compliance data were gathered on the TH group. Outcome variables included hospital mortality, length of stay (LOS), and discharge disposition. Data were analyzed by using descriptive statistics; χ2 and t tests were used to compare outcomes between groups. Logistic regression modeling techniques were used to determine predictors of mortality.
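The between-group mortality comparison described above can be illustrated with a χ2 test. The 2x2 counts below are hypothetical, assumed only to echo the abstract’s reported mortality percentages (89.4% vs 75%); they are not the study’s data, and SciPy is assumed.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table (rows: non-TH, TH; columns: died, survived),
# constructed for illustration to mirror 89.4% vs 75% mortality.
table = [
    [152, 18],  # non-TH group: 152/170 died (89.4%)
    [60, 20],   # TH group: 60/80 died (75%)
]

# chi2_contingency applies the Yates continuity correction for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.3f}")
```

A logistic regression on the same outcome, entering group and covariates such as age, would parallel the odds ratio the abstract reports from its regression analysis.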

Results

In 259 patients, significant differences were found between non-TH/TH groups for age (59.9 vs 52.9 years, P < .05), intensive care unit LOS (0.86 vs 5.3 days, P < .001) and hospital LOS (1.64 vs 5.05 days, P < .01). Mortality significantly decreased after protocol implementation (89.4% vs 75%, P < .05), which was supported in regression analyses (P = .05, odds ratio = 2.8). A higher proportion of patients were discharged home after TH protocol (5.1% vs 21.5%, P < .05). Full protocol compliance was documented 30% of the time. Reasons for early termination included completion of all elements (40%), difficulty maintaining temperature (15%), and shivering (5%). Protocol documentation was inconsistent across units and disciplines.

Conclusion

Findings suggest that even initial implementation of TH protocols can result in positive outcomes for patients, as evidenced by decreased mortality rates and an increased proportion of patients discharged home after a TH protocol was initiated as the standard of care. Full compliance with protocols remains difficult. Critical care nurses are integral to the initiation of and adherence to TH protocols and must play a central role in addressing potential barriers and identifying strategies for improved compliance.

RS4 Evaluation of Postextubation Nursing Bedside Swallowing Screening in Cardiovascular Intensive Care Unit

Regina Freeman, Milo Engoren, Margaret Tiner, Shandra James; University of Michigan, Ann Arbor, MI

Purpose

To evaluate whether a bedside swallowing screening tool used by nurses to assess poststroke swallowing impairment could be used in the cardiovascular intensive care unit (CVICU) after extubation to detect swallowing impairment and reduce adverse events. The goals were to reduce adverse events related to swallowing complications without increasing the number of consultations with, or the workload of, speech-language pathology (SLP).

Background

Impaired swallowing and aspiration are potential complications after extubation. Several aspiration events resulted in reintubations of CVICU patients. A root-cause analysis was completed. A literature review was conducted to identify tools to assess postextubation swallowing. Although the literature outlines swallowing complications after extubation, no validated tools exist for evaluating swallowing after extubation. Multiple swallowing screening tools exist for poststroke patients.

Method

The bedside nurse screened and evaluated all patients after extubation by using a modified version of the swallow screening tool. Retrospective chart reviews were conducted on 446 adult patients who were admitted to the CVICU between January 2013 and August 2013. Data collection included demographics, date and time of intubation, extubation, reintubations, swallowing screening results, and consultations with SLP. Explore, frequency, and descriptive procedures were used to derive descriptive statistics. Manual chart reviews were conducted on all reintubated patients to determine the rationale for reintubation. Chart reviews were also conducted for all SLP referrals to determine recommendations and outcomes.

Results

A significant association was found between length of time intubated and initiation of a swallowing screening (P < .007). There were no statistically significant associations between patients’ sex and results of swallowing screening or reintubation rates. Age was not a significant predictor of either reintubation or results of swallowing screening. By volume of cases, patients after lung transplant were most likely to have an unsuccessful swallowing screening. A total of 36 patients were reintubated; however, none of the reintubations were related to aspiration. Thirty-three patients failed the swallowing screening, and 24 of these patients had consultations with SLP. SLP did not observe an increase in consultations after the initiation of the swallowing screening.

Conclusion

The nursing bedside screening used to evaluate poststroke swallowing impairments at our institution is safe and reliable to use after extubation with CVICU patients. Bedside swallowing screening did not increase false-positive referrals to SLP, yet enabled patients at risk for swallowing impairment complications to be identified. A simple bedside swallowing screening after extubation can prevent further complications associated with impaired swallowing such as aspiration, pneumonia, reintubation, and death.

RS5 Cardiac Nurses’ Perceived Self-Efficacy Regarding Patient Education

Gayelynn Allen; Saint Luke’s Hospital of Kansas City, Kansas City, MO

Purpose

To examine cardiac nurses’ self-efficacy beliefs about their ability to provide patient education; identify factors that have enhanced or have been detrimental to their self-efficacy beliefs; and discover factors they think could improve their self-efficacy beliefs.

Background

Although many studies have shown that nurses think that education of patients is an important nursing responsibility, few studies have examined nurses’ self-efficacy beliefs about their ability to provide education to patients. Positive self-efficacy regarding providing patient education enhances the quality and effectiveness of a nurse’s patient teaching activities and thus better prepares patients to manage their own care after hospital discharge in order to avoid complications and possible readmission.

Method

A cross-sectional, descriptive, mixed-methods pilot study of cardiac nurses at 4 acute care hospitals within a single Midwest health system. Data were collected from 73 nurses by using an online survey developed by the principal investigator regarding their self-efficacy beliefs about providing education to cardiac patients. The survey was examined by cardiac nurses and education specialists to determine face and content validity. The data collection instrument consisted of 23 Likert-type response questions, 6 contingency questions, 4 short answer questions, and 6 demographic questions. The survey remained open for approximately 3 months.

Results

Results indicated that cardiac nurses have positive self-efficacy beliefs about their patient education abilities, they consider patient education a high priority, and they enjoy providing patient education. Nurses felt strongly that their teaching helps patients take care of themselves after leaving the hospital. Most nurses indicated that practice, experience, and observing and modeling the teaching behaviors of others were the factors that had the most impact on building their self-efficacy. When asked to identify things that could improve their ability to provide education, many listed more education and several requested education about teaching methods and how to teach.

Conclusion

Educators and administrators must ensure that methods are available to provide nurses with opportunities to acquire patient education self-efficacy: Teach patient education basics in undergraduate nursing programs, make patient education an integral part of a hospital’s nursing orientation or nurse residency programs, and include information in staff development classes about what the patient needs to know and how the nurse can provide that information to the patient.

RS6 What Factors Are Associated With Development of Pressure Ulcers in a Medical Intensive Care Unit?

Lisa Harrison, Inge Smit; University of Virginia Health Systems, Charlottesville, VA

Purpose

Instruments used to determine the risk of pressure ulcer development are universally applied to adult patients; they do not differentiate between intensive care and acute care patients. The purpose of this study was to identify factors associated with pressure ulcer development in a medical intensive care unit.

Background

Pressure ulcers contribute to negative outcomes such as increases in pain and discomfort, risk of infection, and hospital length of stay and costs, as well as a decrease in quality of life. Currently, intensive care unit clinicians in various hospital settings use risk assessment instruments to predict pressure ulcer development. Appropriately identifying the risk factors is paramount to implementing a targeted care plan to avoid pressure ulcer development and/or to facilitate healing of pressure ulcers.

Method

A 15-month retrospective chart review of patients with pressure ulcers in a medical intensive care unit was performed. Statistics were computed on demographics and variables of interest, including pressure ulcer stage, vasopressor infusion, oxygen requirement, comorbid conditions, primary diagnosis, length of stay, mortality, Braden scores, and albumin level. The purpose of this study was to identify factors associated with pressure ulcer development in a medical intensive care unit.

Results

The characteristics of 76 patients who had pressure ulcers develop were evaluated. An equal number of men (38) and women (38) were included. Forty-seven percent had a stage II pressure ulcer. The presence of hemodynamic support with vasopressor administration (P = .02) and the length of stay (P = .02) were noted as the most significant factors for pressure ulcer development in this study.

Conclusion

Vasopressor use and length of stay are not accounted for in current instruments for assessing pressure ulcer risk. The administration of vasopressor support and patient length of stay are potential contributory factors that should not be overlooked when performing pressure ulcer risk assessments. Pressure ulcer risk instruments specific to the intensive care unit population are warranted and should reflect the unique characteristics of the critically ill patient.

RS7 Blended Progressive Care and Medical Surgical Nursing Review Course: Increasing Certification, Containing Cost, and Improving Quality

Paul Wong, Mary Myers; National Institutes of Health, Bethesda, MD

Purpose

To evaluate the effect of a nursing specialty certification review program on nurses’ didactic knowledge related to medical surgical and progressive care nursing standards of practice, determine the cost-effectiveness of developing an education intervention versus an external intervention for nursing specialty certification examination, and examine the effectiveness of a certification preparation program on specialty certification achievement.

Background

The Medical-Surgical Specialties Service (MSS), consisting of medical-surgical and progressive care nursing units, had 37 specialty-certified registered nurses, which represented 28% of the current nursing staff. Research has demonstrated a link between nursing certification and nursing-sensitive quality indicators. To these ends, the Service Educators designed a medical-surgical and progressive care nursing certification review course based on certification test blueprints.

Method

This study was a pre-post assessment of a blended specialty certification review course. Participants were a convenience sample of nurses (n = 42) in a federal research hospital from March 2014 to December 2014. Instruments included a demographic data questionnaire, a knowledge assessment pretest, and a knowledge assessment posttest. Raw and percentage scores of pretests and posttests were evaluated; mean differences in scores were calculated by using paired t tests; costs of external and internal courses were compared; and participants’ success rate on nursing specialty certification examinations was reviewed.

Results

Statistically significant differences in knowledge level were found after nurses participated in the focused education intervention for certification preparation. On average, the certification review course improved knowledge assessment scores by 11% on the medical-surgical examination and 14.6% on the progressive care examination. Additionally, the course increased the total number of certified nurses by 16.2% while generating a potential net savings of $124 500 in comparison with an external intervention.

Conclusion

The course increased the number of certified nurses in the MSS from 37 to 43 within 1 year (16.2%). The data are indicative of future success in that a post-course assessment score of 84% translates to a 95% probability of successful certification when the examination is attempted within 6 months of course attendance. This course is a didactically effective, cost-efficient alternative to external review courses and a prospective quality improvement tool.

RS8 Implications of Interarm Blood Pressure Differences in Patients Admitted to Critical Care Units

Jayne Rosenberger, Susan McCrudden, Nancy Albert, Lu Wang; Hillcrest Hospital, Mayfield Heights, OH

Purpose

AACN recommends interarm blood pressure differences be measured at admission on all patients admitted to adult intensive care units (ICUs; level of evidence: expert opinion). Therefore, the purposes of this research study were to determine differences between the arms in systolic/diastolic blood pressure (SBP/DBP) measurements obtained simultaneously (SIM) and sequentially (SEQ) on admission, examine if patient factors predicted interarm differences in blood pressure, and determine if clinical outcomes varied by interarm blood pressure differences.

Background

A discrepancy in standard blood pressure monitoring practices was found. The coronary-care ICU measured blood pressure in both arms sequentially at admission, whereas the cardiovascular-surgery and medical-surgical ICUs measured blood pressure in 1 arm at admission. Participants in previous studies of interarm differences were healthy adults, ambulatory patients with hypertension, and adults with chronic diseases. No evidence was available on clinical outcomes of an ICU stay based on interarm blood pressure differences.

Method

A prospective, comparative design was used in 3 ICUs of a 500-bed, tertiary-care hospital. Of 424 adult ICU patients, mean age was 67.6 (SD, 17.7) years. Patients were excluded if blood pressure could not be measured in both arms (eg, in a patient with a shunt or recent mastectomy) or if they refused. At admission, simultaneous and then sequential upper-arm blood pressures were measured by the staff members caring for the patients, using standard practices. Patients' characteristics, ICU/hospital length of stay (LOS), and discharge disposition were abstracted from the hospital's administrative databases. Multivariable logistic models were created to determine if interarm blood pressure differences predicted clinical outcomes.

Results

Mean blood pressures differed by < 1 mm Hg between arms. Prevalence of interarm differences > 10 mm Hg in SIM/SEQ readings was 31.8%/35.1% (SBP) and 13.4%/17.7% (DBP); prevalence of differences > 15 mm Hg was 17.9%/19.8% (SBP) and 5.9%/7.8% (DBP). Older age, nonmarried status, and hypertension history were associated with higher interarm blood pressure differences. When SIM interarm blood pressure differences were > 10 and > 15 mm Hg, discharge home was less likely (> 10 mm Hg difference SBP/DBP, P = .01/P = .01; > 15 mm Hg difference SBP/DBP, P = .004/P = .002). There was a 79% risk reduction of discharge home when SIM interarm DBP differences exceeded 15 mm Hg (P = .009). Interarm differences in SEQ blood pressures were not associated with clinical outcomes.

Conclusion

Simultaneous interarm DBP measurement differences > 15 mm Hg at ICU admission predicted discharge home status; however, SEQ interarm blood pressure differences were not associated with outcomes. Obtaining SIM interarm blood pressures is simple and expedient but requires extra equipment. Nursing interventions should be developed for patients whose SIM interarm blood pressure differences at admission exceed 15 mm Hg. Care coordination may be needed to facilitate discharge home.

RS9 Identification of Risk Factors for Bleeding in Patients After Percutaneous Coronary Intervention

Joan Pool; Saint Luke’s Hospital, Kansas City, MO

Purpose

To examine the risk factors associated with bleeding after percutaneous coronary intervention (PCI) among patients in the Saint Luke's Health System metro facilities. A secondary purpose was to validate the bleeding risk model by using the ePRISM tool. Patients' designation as low, moderate, or high bleeding risk will be incorporated into an algorithm to identify optimal patient flow to an overnight recovery unit, prep and recovery unit, or telemetry unit after the procedure.

Background

Plans to relocate cardiovascular procedural units and reduce bed capacity in January 2016 necessitate a change in current nursing practice. This raised concerns regarding the location, flow, and care of patients before and after the procedure. The ePRISM tool is currently used by physicians to predict the bleeding risk for PCI patients but nurses have not used this information. Bleeding risk must be considered as a criterion for making decisions regarding patient care after PCI.

Method

A cross-sectional, retrospective, descriptive, correlational design was used to examine data on 8045 patients from the National Cardiovascular Data Registry (NCDR) after diagnostic catheterization (DC) and PCI at our 4 metro hospitals from 2009 through 2014. Saint Luke's Health System is one of the data registry sites for the NCDR, so we were able to use the established database to obtain bleeding information on our patient population to compare with the predicted bleeding risk.

Results

The predicted bleeding risk from ePRISM over the 6-year span ranged from 0.024 (SD, 0.025) to 0.026 (SD, 0.028). The actual bleeding rate at Saint Luke's Health System ranged from 2.11% to 0.94% over the 6 years, with the lowest rates during the past 2 years (0.94%–1.10%). Comparing predicted rates with observed rates indicates that the ePRISM tool slightly overpredicted the bleeding rate across the 4 metro hospitals in our system for the given time period. Both the predicted and the actual bleeding rates were below the national norm. The top 6 comorbid conditions identified as risk factors for bleeding after PCI were consistent with those reported in the literature.

Conclusion

This study validated the ePRISM bleeding risk model. The tool can be used by nurses to identify patients at highest risk for bleeding complications after PCI. However, given that our bleeding rate is very low and below the national norm, the bleeding risk score should be just one aspect of a patient flow algorithm. Other factors, such as conscious sedation, frequent groin checks, procedural complications, and anticoagulation therapies, also should be considered.

RS10 Peer Support for the Second Victim Sponsored by Local AACN Chapter: A Pilot Study

Sara Warth, Pamela Minarik; Samuel Merritt University, Oakland, CA

Purpose

To evaluate a 1-to-1 peer support program for the second victim offered through a local chapter of the AACN. Does peer support lead to the development of enhanced coping skills and lessen the impact of an adverse event?

Background

The registered nurse who has committed a medical error can experience a cascade of problems that have mental, emotional, and physical effects. This experience has been termed the second victim phenomenon. Although the phenomenon is described frequently in medical literature, effective treatments for the sufferer have yet to be evaluated. Intentional 1-to-1 peer support offered through a professional organization may provide effective treatment.

Method

A feasibility study offering a 1-month peer support intervention within the local AACN chapter was completed. Fourteen nurses who work in adult intensive care units self-reported symptoms of the second victim phenomenon and participated in the study. Outcomes measured included the development of coping skills and a reduction in the distress associated with the medical error the individual had committed. Evaluation tools were the Brief COPE Inventory and the Impact of Event Scale–Revised. The R program was used for statistical analysis.

Results

Four domains on the Brief COPE Inventory demonstrated a statistically significant change from the pretest to the posttest: self-blame, religion, planning, and venting. There was no statistically significant change on the Impact of Event Scale–Revised.

Conclusion

Results suggest that 1-to-1 peer support offered through a professional organization may be beneficial to the individual with second victim symptoms. Further research is necessary to evaluate the effectiveness of peer support programs for the second victim.

RS11 Risk Factors for Ventilator-Associated Pneumonia Among Trauma Patients With and Without Brain Injury

Anastasia Gianakis, Molly McNett, Dawnetta Grimm, Cristina Moran; MetroHealth Medical Center, Cleveland, OH

Purpose

Research has identified risk factors for ventilator-associated pneumonia (VAP), and implementation of bundles has improved rates. Yet VAP rates remain elevated among critically ill trauma and brain-injured patients. The aims of this study were to (1) identify risk factors for VAP among critically ill trauma patients with and without brain injury who were undergoing mechanical ventilation and (2) differentiate VAP prevalence among critically ill trauma patients with and without brain injury treated in the same intensive care unit.

Background

VAP is a leading cause of hospital-acquired infections. Research identifies risk factors for VAP, and critical care nurses are instrumental in evidence-based prevention efforts. However, little research identifies causative factors for VAP among critically ill trauma or brain-injured patients, who typically experience higher rates. Research is needed to identify contributing factors for VAP among these high-risk populations in order to guide preventative efforts by nurses to decrease VAP.

Method

A retrospective case-control study design was used. Adult trauma patients admitted to the intensive care unit during a 12-month period at an urban academic level I trauma center were included in the study. Subjects included trauma patients with brain injury (cases) and without brain injury (controls). Data were abstracted from a respiratory database, trauma registry, electronic medical records, and quality department VAP reports. Variables included demographic information, presence of VAP risk factors, and daily clinical data. Outcome variables included VAP (defined by the quality department using criteria from the Centers for Disease Control and Prevention), number of ventilator days, and hospital and unit length of stay.

Results

The study included 157 patients (76 cases, 81 controls). Cases and controls were similar in age, sex, severity of injury, number of ventilator days, and length of stay. Trauma patients with brain injury had a higher proportion of emergent (P < .001) and field (P < .001) intubations than controls had. The VAP rate for patients with brain injury was slightly higher than for controls (11.8% vs 11.1%). The strongest predictor of VAP among cases was younger age (odds ratio [OR] = 2.15, 95% CI = 1.17–4.13, P = .02), whereas number of ventilator days was the best predictor of VAP in controls (OR = 1.4, 95% CI = 1.12–1.81, P = .004).
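
The odds ratios above come from regression models on the case-control data, but the underlying odds-ratio arithmetic can be illustrated from a simple 2 × 2 table. The cell counts below are hypothetical, not the study's data, and the 95% confidence interval uses Woolf's log-odds method.

```python
# Odds ratio with a 95% CI from a 2x2 exposure/outcome table.
# Counts are hypothetical illustrations, not the study's data.
import math

# Rows: exposed / unexposed; columns: VAP / no VAP
a, b = 7, 20   # exposed:   VAP, no VAP
c, d = 2, 47   # unexposed: VAP, no VAP

odds_ratio = (a * d) / (b * c)

# Woolf's method: standard error on the log odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```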

Conclusion

Findings contribute information on VAP prevalence and variation in risk profiles among trauma patients with and without brain injury. Both populations are at high risk for VAP, which may be due to factors not amenable to nursing preventative efforts, such as patients' age, injury type, and location of intubation. Prospective studies are needed to validate the initial findings and determine if VAP rates remain a reliable indicator of quality of care in trauma and brain-injured patients.

RS12 Facilitation of Consistent Communication of New Medicines

Lisa Cossaboon; Inspira Health Network, Vineland, NJ

Purpose

To develop a consistent process to improve the patient satisfaction domain for communication about new medicines at Inspira Health Network (IHN). Would a nurses' medication classification teaching pocket guide and identical patient index cards listing the most commonly prescribed therapeutic categories of medicines, top side effects, and the purpose of each medication improve communication about medications?

Background

Patient satisfaction scores in the communication about medicines domain of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey were below the national benchmark average and low in percentile ranking. Along with being publicly reported, HCAHPS scores are linked to Medicare reimbursements for hospitals. Numerous patient education materials were available for nurses' use in patient teaching; however, inconsistent teaching methods were reported across the organization.

Method

The interventions were a classroom educational presentation; an easy-to-use, color-coded, systems-based medication classification pocket guide for nurses; and identical index cards for patients. This quantitative study used a convenience sample of registered nurses on 2 medical-surgical nursing units on 2 separate campuses, with patients at least 18 years old. HCAHPS domain scores were compared retrospectively for the 6 months before and the 6 months after the intervention; differences were evaluated by using a t test. This study was submitted to the institutional review board and deemed exempt.

Results

Clinical significance was shown through the HCAHPS scores before and after implementation, by discharge date. One unit's scores rose from the 1st percentile ranking in comparison with the New Jersey Peer Group database before implementation to the 92nd percentile ranking after implementation. The second unit's scores rose from the 4th percentile ranking to the 25th percentile ranking. Patient surveys indicated large variation in one of the units, which was not statistically significant (P = .30); however, combined scores showed a statistically significant improvement in communication of side effects (P = .008).

Conclusion

The provision of a nurse medication classification pocket guide, identical patient index cards, and medication communication educational sessions improved consistency among nursing staff, leading to improved recall of teaching by patients. Limitations of the study included reassignment of staff from nonintervention units; the intervention unit staff reinforced the tools and project details. Further research is needed to determine the usefulness of these tools in decreasing readmission rates.

RS13 Breaking Bad News: A Novel Approach to Communication Training in Adult Gerontology Acute Care Nurse Practitioner Intensivist Students

Megan Shifrin, Brian Widmar, Nathan Ashby, Jill Nelson; Vanderbilt University School of Nursing, Nashville, TN

Purpose

Previous communication training methods in the intensivist adult gerontology acute care nurse practitioner (AGACNP) courses have consisted of only simulation and debriefing. The purpose of this pilot study was to determine if adding a reflective experience from a family member’s time with a loved one in the intensive care unit (ICU) and a palliative care lecture before simulation-based communication training would assist students in learning how to “break bad news” to patients’ significant others.

Background

Effective communication skills are identified as an educational and practice competency for AGACNP students. High-fidelity simulation and ICU clinical rotations are used to prepare AGACNP students in the intensivist subspecialty at the Vanderbilt University School of Nursing for clinical practice in ICUs. However, evidence to support how to educationally prepare intensivist AGACNP students to lead difficult conversations with ICU patients and patients’ significant others is minimal.

Method

The educational experience consisted of 3 components and was followed by a voluntary, anonymous electronic survey: (1) the reflective experience of an ICU patient's family member regarding the impact of the health care team's communication; (2) an instructional session on how to lead difficult conversations, taught by a palliative care nurse practitioner and moderated by a multidisciplinary panel; and (3) three high-fidelity simulations and multidisciplinary debriefings in which AGACNP intensivist students had to communicate simulated "bad news" to family members portrayed by staff members employed by the institution's simulation laboratory.

Results

Seven participants completed all aspects of the learning activity and replied to the optional survey. The aggregated data collected by the anonymous, electronic survey indicated that the cumulative learning experience strongly contributed to AGACNP intensivist students’ perceived confidence and ability to lead difficult conversations in critical care settings. The experience also assisted AGACNP intensivist students in identifying gaps in their current communication patterns and areas for communication improvement.

Conclusion

This small pilot study indicated that a multidisciplinary approach to simulation-based communication training increases AGACNP intensivist students' perceived confidence and ability to lead challenging conversations in ICUs. This educational approach may also assist AGACNP intensivist students in identifying areas where additional interpersonal communication training is needed. Further research should be directed at refining the educational methods used in intensivist AGACNP communication training.

RS14 Nonventilator Hospital-Acquired Pneumonia: The Hidden Epidemic

Karen Giuliano, Dian Baker, Barbara Quinn; Sage Products, Cary, IL

Purpose

Nonventilator hospital-acquired pneumonia (NV-HAP) is an underreported and understudied disease. The purpose of our study was to use a large national sample to determine the US incidence of NV-HAP.

Background

US hospitals must monitor ventilator-associated pneumonia (VAP); however, there are no requirements to monitor NV-HAP. The limited studies available indicate that NV-HAP prolongs hospital stays, is associated with significant patient morbidity and mortality, and increases the cost of care. Preventing even 100 cases of NV-HAP may save up to $4 million, 700–900 hospital days, and the lives of 20–30 patients.

Method

We used the 2012 Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) to determine the number of adult patients in US acute care hospitals in whom NV-HAP developed that was not present on admission. The HCUP NIS is a sample of inpatient records covering all hospitals participating in the HCUP for a given year (2012). Each record includes hospital attributes, diagnosis and procedure codes, billing information, and basic patient demographics for a unique visit. The full database was mined for records in which adult patients had a nonprimary diagnosis of pneumonia.

Results

The HCUP database for 2012 contained 7 296 968 records, of which 6 567 271 were for adults 18 years or older. There were 478 465 records with at least 1 noted pneumonia diagnosis and 284 601 records after pneumonia as a primary diagnosis was excluded. Using these data, the overall incidence rate was 4.3 per 1000 patient days.
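
The rate reported above follows the standard incidence formula, cases ÷ patient days × 1000. A minimal sketch: the case count is taken from the abstract, but the patient-day denominator is a hypothetical placeholder, since the abstract does not report it.

```python
# Incidence rate per 1000 patient days = cases / patient days x 1000.
def incidence_per_1000_patient_days(cases: int, patient_days: int) -> float:
    return cases / patient_days * 1000

nv_hap_cases = 284_601      # records after excluding primary-diagnosis pneumonia
patient_days = 66_186_000   # hypothetical denominator chosen only for illustration

rate = incidence_per_1000_patient_days(nv_hap_cases, patient_days)
print(f"{rate:.1f} NV-HAP cases per 1000 patient days")
```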

Conclusion

The incidence found in our study was similar to findings from previous research. The first step in addressing NV-HAP is to examine the incidence of NV-HAP by using standardized metrics, because incidence rates of NV-HAP are unknown in most hospital systems. NV-HAP should be elevated to the same level of concern, attention, and effort as prevention of VAP in hospitals. Nurses are in a key position for prevention and outcome monitoring to ensure that patients are protected from NV-HAP. Unrestricted grant from Sage Products LLC, Cary, IL.

RS15 Intravenous Smart Pumps: Impact of a Simplified User Interface on Clinical Use

Karen Giuliano; Yale University, New Haven, CT

Purpose

The purpose of this study was to measure the differences in programming times and the frequency of programming use error among 3 intravenous smart pumps. The specific aims of the study were (1) to compare the differences in programming times among 3 intravenous smart pumps on 5 common programming tasks and (2) to compare the differences in the frequency of programming use error among the 3 pumps.

Background

The use of intravenous smart pumps can reduce intravenous medication errors, but data indicate that a high incidence of intravenous medication errors continues. Sources of error include overriding alerts and bypassing the dose error reduction system (DERS). The complexity of the user interface, the time required for pump programming, and incomplete drug libraries are among the most frequently cited reasons. Research suggests that most intravenous medication errors are related to incorrect or incomplete programming.

Method

Fifteen critical care nurses (CCNs), each currently working at least 20 hours per week in direct critical care and having 2 years of critical care nursing experience and 2 years of experience operating intravenous smart pumps, were recruited. CCNs came to a simulation laboratory and received user instruction on 2 unfamiliar intravenous smart pumps: one prototype device (pump C) and one commercially available device (pump A or B). CCNs were asked to complete 5 common intravenous programming tasks: change infusion rate, deliver an antibiotic as a secondary infusion, deliver and titrate a weight-based infusion, and deliver a morphine infusion with a bolus. The programming times and errors were recorded.

Results

The mean time in seconds for all 5 tasks combined was 40.8 (pump A), 40.5 (pump B), and 17.9 (pump C). Using analysis of variance, significant differences were found on all 5 programming tasks between pump A/pump B and pump C, differences that were also reflected in the effect sizes. Because programming times were fastest on pump C, effect sizes were computed by comparing pump C with pump A and with pump B. The mean effect size for all 5 programming tasks was 0.71 (range, 0.39–0.86) for pump A and 0.65 (range, 0.5–0.77) for pump B, indicating large effect sizes. There were also differences in overall use error rate (pump A, 7%; pump B, 3%; pump C, 1%).
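
The abstract does not state which effect-size statistic was used. A common choice when comparing two means is Cohen's d with a pooled standard deviation, sketched below on hypothetical per-task times, not the study's raw measurements.

```python
# Cohen's d: standardized mean difference with a pooled SD.
# All times are hypothetical, for illustration only.
import math

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # sample variances (n - 1 denominator)
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

pump_a_times = [30, 55, 38, 28, 53]   # seconds, hypothetical
pump_c_times = [10, 26, 18, 14, 22]   # seconds, hypothetical
d = cohens_d(pump_a_times, pump_c_times)
print(f"Cohen's d = {d:.2f}")
```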

Conclusion

These findings indicate that the use of the prototype intravenous smart pump had a positive impact on both programming times and use errors. The longer it takes to program an intravenous smart pump, the more likely it is that frustration and workarounds will contribute to end-user intravenous medication errors. Current technology that can make intravenous smart pumps simpler and easier to use should be integrated into current devices to increase patient safety.

RS16 Does the Time of Taking a Patient Off Intravenous Insulin After Cardiac Surgery Affect Outcomes?

Amy Westbrook, Michele Gobber, Martha McDermott, Daisy Sherry; Edward-Elmhurst Healthcare, Elmhurst, IL

Purpose

To compare patient outcomes between groups before and after implementation of the new Surgical Care Improvement Project (SCIP) guideline for blood glucose management after cardiac surgery. Can a patient be transitioned off intravenous insulin 24 hours after heart surgery instead of 48 hours after surgery and have similar outcomes?

Background

SCIP is a national partnership of organizations working to improve the safety of surgical care and to reduce surgical mortality and morbidity. The SCIP guidelines changed in January 2014 to require blood glucose to be 180 mg/dL or less between 18 and 24 hours after anesthesia end time. Nurses questioned whether administration of intravenous insulin could be stopped earlier than the second day after surgery.

Method

With approval from the institutional review board, a case-control study design used retrospective data from 59 patients (July 2013 to December 2013) and 60 prospective cases (July 2014 to December 2014). Blood glucose levels from anesthesia end time to 48 hours after surgery were coded to evaluate hyperglycemia (> 180 mg/dL) and hypoglycemia (< 70 mg/dL). Outcomes also included hospital length of stay, sternal wound infections, and mortality within the first 30 days after surgery. Chi-square tests, relative risks, odds ratios, and correlations were calculated.

Results

Our study included 119 patients; 81% of the sample were male, and the mean (SD) age was 67 (12) years; 68% had hypertension, 31% had diabetes, and 14% had heart failure. The case and control groups did not differ in events of hyperglycemia or hypoglycemia, length of stay, sternal wound infection, mortality, or the comorbid conditions of hypertension, diabetes mellitus, and congestive heart failure. No relationships were found between study variables, including hemoglobin A1c level.

Conclusion

This is the first reported study, to our knowledge, to examine the effects of the new SCIP guidelines. Based on the results of this nurse-led study, it is appropriate to stop administering intravenous insulin 24 hours after anesthesia end time, a change that has been incorporated into our practice. As of July 1, 2014, SCIP-Inf-4: Cardiac Surgery Patients with Controlled Postoperative Blood Glucose has been suspended. Our study was initiated before the suspension of the SCIP guideline.

RS17 Analgesia-Based Sedation: Effects on Ventilator Days and Delirium

Ryan Robisheaux, Kevin Kyle Laurente, Yvonne Salinas; University Health System, San Antonio, TX

Purpose

Benzodiazepines have been used for years as the preferred sedative during mechanical ventilation. Recent studies have challenged this standard, indicating that benzodiazepines cause a higher incidence of delirium, thereby increasing hospital length of stay and costs. We examined these claims and the effectiveness of nurse-driven sedation protocols through a review of data gathered in a 22-bed medical intensive care unit (MICU) at an academic Magnet teaching facility.

Background

One of the many challenges when caring for patients receiving mechanical ventilation has been the proper management of sedation. With the use of benzodiazepines, delirium has been noted to be prevalent, and in many cases, reintubations, long-term cognitive impairment, costs, and mortality have increased. This study builds on the Society of Critical Care Medicine’s guidelines for sedation by supplementing research data and offering protocols for nurses to reach target sedation with the proper medication.

Method

Ninety-six chart audits were conducted between July 2014 and April 2015. Confusion Assessment Method for the ICU (CAM-ICU) scores were recorded for each patient, along with incidence and duration of delirium. Ventilator days also were recorded for each patient. Length of delirium, incidence of delirium, and number of days of mechanical ventilation were compared between 2 groups: those treated with benzodiazepine-based sedation and those treated with analgesia-based sedation. Training on the MICU analgesia sedation protocol and CAM-ICU was conducted before initiation of the study through in-service training sessions, handouts, and individual education. The protocol was made readily available within the nursing stations.

Results

Evaluation of the data showed that benzodiazepine-based sedation did not significantly increase the mean duration of delirium (2.14 days vs 2 days); however, it almost doubled the incidence of delirium (31% vs 18%). With the increased incidence, treatment costs increased, as noted by the Society of Critical Care Medicine. An increase of 0.59 ventilator days also was noted when benzodiazepines were used. Chart evaluation revealed that the MICU analgesia-based sedation protocol based on the Society of Critical Care Medicine guidelines was not always properly used by nurses; those patients were excluded from the data.

Conclusion

With the increased incidence of delirium, vigilant training on CAM-ICU assessments for earlier detection and treatment, along with the proper use of a unit-based analgesia sedation protocol, is indicated. Given the increased risks associated with the use of benzodiazepines, this study recommends that these medications be used only when medically necessary and not for routine sedation. These findings further supplement the Society of Critical Care Medicine's guidelines.

RS18 A Comparison of 3 Intravenous Bolus Medication Systems: A Randomized Cross-Over Simulation Study

Maureen Burger, Dan Degnan; Visante, Inc., Indianapolis, IN

Purpose

To compare preparation time, medication errors, and nursing preferences among 3 systems for administering intravenous medication. Knowledge gained from this study may help inform pharmacy drug-purchasing decisions and improve nursing practices.

Background

Increased workloads, higher patient volumes, and staff shortages have put greater demands on critical care nurses and may lead to at-risk behaviors that compromise patient safety. Information from the Institute for Safe Medication Practices raises questions about the safety of nursing practices in preparing intravenous bolus medications. Commercial products are now available that may reduce time and improve safety.

Method

A randomized cross-over simulation design was used to compare drug preparation time and the rate of preparation errors among the BD Simplist (BDS), Carpuject (CJ), and traditional vial and syringe process (TVSP). Three medication preparation areas were created to mimic clinical practice in the hospital. Twenty-four nurses were assigned to 3 groups and asked to prepare intravenous doses of diphenhydramine, ketorolac, and morphine using BDS, CJ, or TVSP. Total time for the preparation of each drug was measured, and medication errors were noted. At the start of the study, nurses scored their stress levels for aspects of intravenous bolus medication preparation; at the end of the study, nurses ranked their preferred method.

Results

Mean time in seconds for drug preparation was significantly shorter (P < .001) with BDS (28.7; 95% CI, 23.3–34.2) and CJ (28.3; 95% CI, 23.1–33.5) than with TVSP (65.8; 95% CI, 57.7–73.9). The time difference between BDS and CJ was not statistically significant. The overall medication error rate for the study was 50.1%. Medication errors were significantly reduced with BDS compared with both CJ and TVSP (1.4% vs 77% vs 53%; P < .001). Nurses ranked BDS as the preferred method.
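
The 95% confidence intervals on the mean preparation times above follow the usual mean ± t × SE construction. A sketch with hypothetical preparation times, not the study's raw measurements:

```python
# 95% confidence interval for a mean, using the t distribution.
# Times are hypothetical, for illustration only.
import math
import statistics

times = [22, 31, 35, 24, 29, 33, 27, 28]   # seconds, hypothetical
n = len(times)
mean = statistics.mean(times)
se = statistics.stdev(times) / math.sqrt(n)   # standard error of the mean

T_CRIT = 2.365  # two-tailed critical value, alpha = .05, n - 1 = 7 df
lo, hi = mean - T_CRIT * se, mean + T_CRIT * se
print(f"mean = {mean:.1f} s (95% CI, {lo:.1f}-{hi:.1f})")
```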

Conclusion

When nurses are pressed for time, distracted, or interrupted, patient safety is often compromised. The BD Simplist system for intravenous bolus medications may offer nurses an opportunity to save time, reduce errors, and improve patient safety during medication administration. Nursing preferences as well as the safety profiles of intravenous bolus medication systems should be factored into pharmacy drug-purchasing decisions. Financial support for study provided by BD Rx via Visante.

RS19 Predictors of Inflammatory Complications in Patients Who Received Component Transfusion After Trauma

Allison Jones, Heather Bush, Susan Frazier; University of Alabama at Birmingham, Birmingham, AL

Purpose

To evaluate the relationship between transfusion-related variables and development of inflammatory complications in patients with major blunt trauma.

Background

Transfusion of blood components is associated with increased risk of in-hospital complications and mortality. Patients experience a decrease in physiological reserve and release of both anti-inflammatory and proinflammatory mediators following traumatic injury, predisposing them to development of inflammatory complications (IC).

Method

We performed a secondary analysis of the Inflammation and Host Response to Injury Trauma-Related Database (n = 1656). We included adult patients between the ages of 18 and 65 years who received blood component transfusions in the first 24 hours after admission to the emergency department. We evaluated the prevalence of IC by using frequencies and percentages. Logistic regression and Cox proportional hazards models were used to determine whether blood transfusion volume and the ratio of components transfused in the first 24 hours after hospital admission predicted development of and time to diagnosis of IC, adjusting for age, sex, injury severity, and comorbid conditions.

Results

Patients were mostly white (90%), male (68%), and critically injured (mean [SD] New Injury Severity Score, 39 [13]), with a mean age of 39 (SD, 14) years. By 24 hours, all had received packed red blood cells (PRBC), 65% received fresh frozen plasma, and 40% received platelets. The majority (86%) had at least 1 IC develop. Time to first IC was a median of 5 days (interquartile range, 2–8). Comorbid conditions (odds ratio [OR], 5.4; 95% CI, 2.24–12.89; P < .001) and 24-hour PRBC volume (OR, 1.08; 95% CI, 1.02–1.15; P = .01) predicted IC development. In the Cox regression, injury severity (hazard ratio [HR], 1.41; 95% CI, 1.03–1.92; P = .03) and 24-hour PRBC volume (HR, 1.01; 95% CI, 1.00–1.02; P = .001) were associated with time to IC development.

Conclusion

Enhanced understanding of the mechanisms that contribute to immune alterations after trauma and blood component transfusion may provide clinicians with the ability to individualize patient management and reduce complications to optimize patient outcomes.

RS20 Developing Critical Care Nurses’ Views of Family Presence During Resuscitation Via Online Learning

Kelly Powers, Lori Candela; University of North Carolina at Charlotte, Charlotte, NC

Purpose

The first aim was to develop an online learning module on family presence during resuscitation (FPDR) derived from a review of the literature and best practices of online education. The second and main purpose of the study was to determine the impact of this online learning module on critical care nurses’ perception and self-confidence for FPDR implementation with adult patients.

Background

Patients and family members support FPDR and view it as their right. Yet, nurses have mixed views, and FPDR is not commonly implemented in bedside care. Only one-third of nurses implement FPDR, and recent research suggests an even lower rate in critical care despite the high incidence of cardiac arrest. Education can improve nurses’ support for FPDR; however, all prior research has used face-to-face learning. The rise of online learning could greatly increase availability of FPDR education.

Method

A 2-group, pretest-posttest quasi-experimental design with random assignment to groups was used with a national sample of critical care nurses. The sample was recruited through study advertisements on AACN’s eNewsline and social media sites. An extensive review of the literature was conducted to develop an online learning module on FPDR, which was administered to the intervention group. Perception and self-confidence for FPDR were measured by using the Family Presence Risk-Benefit Scale (FPR-BS) and the Family Presence Self-Confidence Scale (FPS-CS). A 2-factor, mixed-model factorial analysis of variance was used to detect mean differences on the FPR-BS and FPS-CS, with significance set at P < .05.

Results

A total of 74 critical care nurses participated in the study. The majority had worked in critical care for more than 10 years, and all were experienced in providing cardiac arrest care; yet, only 42% reported any prior education on FPDR. Data analysis revealed statistically significant increases for only the intervention group, with mean FPR-BS score increasing from 3.63 to 4.07 (P < .001) and mean FPS-CS score increasing from 4.24 to 4.57 (P < .001). This demonstrated improved perception and self-confidence following online learning on FPDR. For the control group, the change in mean FPR-BS score was not significant (P = .23) and there was no change in mean FPS-CS score.

Conclusion

Online learning on FPDR is a feasible and effective method for educating large numbers of critical care nurses. Online learning can improve perception and self-confidence, which may promote more widespread implementation of FPDR in practice. Professional organizations should consider the potential of online learning to improve accessibility to FPDR education for all resuscitative care providers. Managers and educators should consider the use of online learning on FPDR to improve critical care nurses’ support.

RS21 Hospital Window View, Light, and Clinical Outcomes Among Cardiovascular Patients

Nancy Albert, Randy Gesie, Esther Bernhofer, Ellen Slifcak, Robert Butler; Cleveland Clinic, Cleveland, OH

Purpose

To determine whether light exposure level (measured in lux), psychological factors (depression, anxiety, or hostility), and clinical outcomes (emergency response calls, transfer to intensive care unit, hospital length of stay, discharge disposition, perceived health status, and perceptions of pain) differed among cardiac medical and surgical patients admitted to private hospital rooms with 1 of 3 window views: nature, building wall, or sky.

Background

Evidence-based design became more prominent in 1999, when the Institute of Medicine published “To Err Is Human.” Effects of natural and ambient lighting have been reported previously, but one study was retrospective, comparing southern (brighter) light with dimmer light exposure, and the other was a cohort study of rooms with windows versus rooms without windows. Our contemporary sample of patients in large, private rooms provides new knowledge on associations between window views and psychological and clinical outcomes.

Method

We used a comparative design and a convenience sample of cardiac medical-surgical patients from 1 medical center who were admitted for 2+ days before enrollment. Exclusion criteria included being asleep when approached, history of dementia, confusion, and lethargy (renal or hepatic failure). Outdoor weather conditions were recorded; the Brief Symptom Inventory measured psychological factors, a light meter measured lux, and the Short Form 36 Health Survey (SF-36) global health item measured health status. Other data were assessed via hospital databases and a brief patient survey. Analysis included comparative statistics and multivariate regression models to adjust group comparisons for confounding variables.

Results

Among the 463 patients, mean age was 63 (SD, 15) years, 55% were male, and 34% had nature views. Patients with sky or nature views were more likely to have surgical diagnoses (P < .001) and had fewer comorbid conditions (P = .009) than did patients with building views. Light meter readings were highest for nature views (P < .001). Health ratings were higher in patients with nature views (P = .01). After controlling for patient factors that differed by window view, compared with building or sky views, patients with nature views had longer hospital stays (P = .03), were discharged home less often (P = .009), and had higher health status ratings (P = .01); there were no differences in other outcomes.

Conclusion

Nurses were more likely to place patients with surgical diagnoses in rooms with window views of sky or nature. These patients had longer stays and were more likely to be discharged to a setting other than home, but had higher perceived health status ratings that were not associated with psychological factors. Window view may not be as important as light intensity and general exposure. More research on patient placement based on window view is needed.

RS25 Critical Care Nurses’ Perception of Workload in Responding to Alarms: What’s All That Ringing About?

Robin Krinsky; Mount Sinai Hospital, New York, NY

Purpose

To identify the total workload and workload domains of the task of responding to cardiac monitor alarms and to understand interrelationships between chronic fatigue (CF), acute fatigue (AF), intershift recovery (IR), and the workload of responding to cardiac monitor alarms. Additionally, data were analyzed to determine whether critical care nurses (CCNs) who report higher levels of fatigue also report higher levels of workload related to cardiac monitor alarms.

Background

Critical decisions call for mental acuity and accurate assessment of workload, and increases in patient acuity increase workload. Quantifying the effort of responding to cardiac alarms requires measuring workload across its several dimensions. The more difficult the task, the more demanding the work, which can lead to errors and, in turn, to patient harm. To date, the workload of responding to cardiac monitor alarms has not been measured.

Method

A nonprobability convenience sample of 195 CCNs at the National Teaching Institute completed a demographic tool, the Occupational Fatigue Exhaustion Recovery Scale to assess CF, AF, and IR, and the National Aeronautics and Space Administration Task Load Index (NASA-TLX) to evaluate their subjective workload of responding to cardiac monitor alarms. A descriptive correlational research design was employed for this project. A quantitative method was chosen to quantify and understand 6 domains of workload in responding to cardiac monitor alarms. Additionally, the interrelationships between CF, AF, IR, and the total workload and the domains of workload of the task of responding to monitor alarms in CCNs were correlated.
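For readers unfamiliar with the instrument, the unweighted (“raw”) NASA Task Load Index composite is simply the mean of the six 0–100 domain ratings. A minimal sketch (illustrative only; the abstract does not state whether weighted or raw scoring was used):

```python
def raw_tlx(domain_ratings):
    """Raw (unweighted) NASA-TLX composite: the mean of the six
    domain ratings (mental, physical, temporal, performance,
    effort, frustration), each on a 0-100 scale."""
    if len(domain_ratings) != 6:
        raise ValueError("NASA-TLX has exactly 6 domains")
    return sum(domain_ratings) / 6

# Six ratings summing to 330 yield a composite of 55,
# the same relationship seen between the study's summed
# and mean total workload figures.
composite = raw_tlx([60, 40, 70, 40, 60, 60])  # -> 55.0
```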

Results

Mean workload (WLM) scores were as follows: mental, 56.48 (SD, 26.27); physical, 44.50 (SD, 29.90); temporal, 69.46 (SD, 21.74); performance, 38.18 (SD, 24.37); effort, 59.26 (SD, 24.52); and frustration, 62.74 (SD, 27.50). The mean of the 6 domains was 55.03 (SD, 16.91), and the summed total workload was 330.18 (SD, 101.49). A positive relationship was found between CF and mental workload (r = .25, P < .05), physical workload (r = .28, P < .01), temporal workload (r = .19, P < .05), performance workload (r = .16, P < .05), effort workload (r = .20, P < .05), workload frustration (r = .26, P < .05), and total workload (r = .35, P < .01). A positive relationship was found between AF and temporal workload (r = .18, P < .01), AF and workload frustration (r = .18, P < .05), and AF and total workload (r = .17, P < .05).

Conclusion

The mean workload of responding to cardiac alarms was higher than that reported in other industries, where scores at or above a red-line safety limit of 50 are associated with reduced performance. Subjective workload was above industry standards. We need to ensure that workload demands do not exceed workload resources. CCNs who reported high CF and AF found the workload to be greater. These high levels of workload need to serve as a wake-up call to the monitoring industry to develop monitoring devices that are less frustrating.

RS27 To Transfer or Not to Transfer to the Pediatric Intensive Care Unit: Is the Use of the Pediatric Early Warning Score Helpful in Pediatric Cancer Patients?

Vicky Ng, Dorothea Dashiell, James Killinger; Memorial Sloan Kettering Cancer Center, New York, NY

Purpose

To determine if the pediatric early warning score (PEWS) tool is applicable in the pediatric cancer population, and if a positive PEWS at the time of rapid response system (RRS) activation would correlate with the need for escalation of care to the pediatric intensive care unit (PICU).

Background

Our RRS is led by PICU pediatric nurse practitioners (PNPs) supported by PICU physicians, and consists of rapid response (RR) calls (response within 5 minutes) and consultations (CS) (response within 30 minutes). Currently the PNPs use their clinical expertise to determine if PICU transfer is warranted. A positive PEWS (PEWS+) has been associated with the need for transfer to the PICU; therefore, we sought to determine if it could aid in determining the need for escalation of care during RRS calls.

Method

This retrospective chart review analyzed 255 RRS calls from June 2014 to June 2015. The following data were extracted and analyzed: call triggers, primary diagnosis, oncology vs stem cell/bone marrow transplant patients, disposition, and repeat calls within 48 hours. Retrospectively, a numeric score was assigned to each call by using the PEWS tool. Scores were derived from the PNP documentation, which contained the vital signs and physical assessment at the time of the call. PEWS+ was defined as a score of 4 or greater or a score of 3 in a single category.
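The PEWS+ rule described above can be expressed directly in code. A minimal sketch (illustrative only; the per-category structure follows common PEWS implementations and is not necessarily identical to this site's tool):

```python
def is_pews_positive(category_scores):
    """Apply the study's PEWS+ rule to per-category subscores
    (each 0-3): positive if the total score is 4 or greater,
    or if any single category scores 3."""
    return sum(category_scores) >= 4 or 3 in category_scores

# Examples: total of 4 is positive; a lone 3 is positive;
# three 1s (total 3, no category at 3) is negative.
assert is_pews_positive([1, 1, 2])
assert is_pews_positive([0, 0, 3])
assert not is_pews_positive([1, 1, 1])
```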

Results

About 56% of calls (n = 144) were PEWS+. PICU admission occurred in 64.6% (n = 93) of PEWS+ calls. Tachycardia was the most common reason for a PEWS+ that did not yield a PICU admission (23.5%; n = 12). Of the negative PEWS (PEWS-) calls, 39.6% (n = 44) required PICU transfer. Hypotension accounted for 30% (n = 13) of PEWS- calls necessitating a PICU admission. Repeat calls within 48 hours accounted for 16% (n = 41) of total calls. Of the repeat calls, 51.2% (n = 21) were PEWS positive on the initial call.

Conclusion

PEWS+ during an RRS call was not a strong predictor of PICU transfer in our patient population. Tachycardia may be too sensitive a trigger in children with cancer, who often have fever, pain, and anemia, which are managed on the inpatient unit. Given the high number of repeat calls for patients who were PEWS+ on the initial call, routine follow-up visits may be indicated for those who screen PEWS+, potentially reducing the number of repeat RRS calls; further studies are warranted to assess this more thoroughly.

RS29 Discrepancies in Measuring Bladder Volumes With Bedside Ultrasound and Bladder Scanning in the Intensive Care Unit

Donna Prentice, Marilyn Schallom, Brian Wessman, Carrie Sona; Barnes-Jewish Hospital, St Louis, MO

Purpose

Patients in intensive care units (ICUs) are at risk for catheter-associated urinary tract infection (CAUTI). Earlier removal of catheters may be possible if accurate measurement of bladder volume after catheter removal can occur. The purpose of this study was to compare measured bladder volumes with a 3-dimensional ultrasound (US), bladder scanner (BS), and urine volume (UVol) in ICU patients with low urine output receiving dialysis or patients with suspected urinary catheter obstruction.

Background

CAUTIs constitute up to 80% of hospital-associated urinary tract infections, leading to increases in health care cost, mortality, use of antimicrobial agents, and length of stay. The risk of a CAUTI developing increases about 7% per catheter day, making infection the most common complication of urinary catheters. Removal of unnecessary catheters is a key intervention in prevention of CAUTI.

Method

A physician trained in US and an advanced practice nurse trained in BS measured bladder volume; each was blinded to the other person’s measurement. The device used first (US or BS) alternated each day. Results of each measurement were documented, and the ICU team determined the need for intermittent catheterization or treatment for suspected obstruction. Fifty-two paired measurements, reported in milliliters, were obtained from 13 patients.
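The paired-difference summaries reported in the Results reduce to simple arithmetic on the US/BS pairs. A minimal sketch (illustrative helper with made-up volumes, not the authors' analysis code):

```python
def paired_difference_summary(us_volumes, bs_volumes):
    """Mean, minimum, and maximum of the paired US-minus-BS
    bladder volume differences (mL)."""
    diffs = [u - b for u, b in zip(us_volumes, bs_volumes)]
    return sum(diffs) / len(diffs), min(diffs), max(diffs)

# Hypothetical pairs: US reads low on one patient, high on another.
mean_diff, low, high = paired_difference_summary([10, 120], [60, 70])
```

A negative mean difference, as reported here, indicates that US readings ran lower than BS readings on average while individual pairs disagreed widely in both directions.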

Results

Mean bladder volume was 71 mL (SD, 125.6; range, 1.7–666) with US compared with 114.3 mL (SD, 130.1; range, 0–529) with BS. Mean difference between US and BS was −43.7 (range, −510 to 598). The correlation between measurements was 0.214 (P = .13). On 8 occasions, UVol measures were obtained. For 6 dialysis patients, the mean difference between US and UVol was 0.48 (range, −68 to 38.2) and the mean difference between BS and UVol was 131.5 (range, −72 to 397). Two patients with suspected catheter obstructions had the following US, BS, and UVol measurements, respectively: (1) 539, 51, > 300 (began voiding around catheter before it was replaced); (2) 666, 68, 1000 with catheter replacement. Conditions leading to the greatest differences were obesity, indwelling catheter, and ascites (known or unknown).

Conclusion

These results demonstrate the inaccuracy of BS. US measures appear more accurate. Based on these findings, BS does not seem to be an acceptable measure of bladder volumes in the critically ill. To remove urinary catheters in patients with minimal to low urine output, serial US measures can be used to monitor bladder volumes and return of renal function.

RS30 Intensive Care Patients Receiving Prolonged Life-Sustaining Treatments

Mary Peterson; Auburn University, Auburn, AL

Purpose

To describe the characteristics of patients who receive prolonged life-sustaining treatments, specifically, mechanical ventilation for more than 30 consecutive days. The demographics, characteristics, outcomes, and financial impact of this population were analyzed. Referrals and use of palliative care services were identified.

Background

From a socioeconomic and ethical perspective, individuals who receive treatment that prolongs life pose a problem for health care systems. The problem affects patients, their families, health care providers, and society, as issues of law, beneficence, nonmaleficence, justice, and spirituality must be considered. Weighing high medical costs against benefit involves complex processes: death and dying, ethics, informed decision making, quality of life, and dignity at end of life.

Method

All adult patients discharged during a 3-year period who received life-sustaining treatment (LST) were included in this retrospective, descriptive study of electronic medical records. The setting was a single-center, metropolitan, teaching hospital with 5 adult critical care units in the southeastern United States. The aim was to determine the patient demographics/characteristics that precede LST (ie, mechanical ventilation > 30 consecutive days). Trauma and nontrauma patients were identified. Descriptive statistical analysis, including measures of central tendency, frequency distributions, and bivariate relationships, was used to synthesize the data.

Results

Data were analyzed for 53 patients who received prolonged mechanical ventilation and were discharged from 2011 to 2013. Demographics/characteristics included age range, 23–67 years (mean, 55.2 years); male sex, 40 (75%); African American race, 28 (53%); Medicare insurance, 20 (38%); length of stay (LOS) range, 32–207 days (mean, 61); trauma, 24 (45%); palliative care consultation, 15 patients (28%); death before discharge, 24 (45%); and mean charges, $626 500 per patient. The 2 cohorts, trauma versus nontrauma, did not differ significantly (at the P = .05 level) except with respect to age. The precedent events associated with LST were diverse. Palliative care referrals came late in the course of illness, at a mean of day 37 of a mean 61-day LOS.

Conclusion

Trends identified from these data suggest a need for earlier intervention (eg, consultation, communication, patient/family conferences) to provide support for families in ethical decision making. Early identification of at-risk patients, the provision of palliative care in conjunction with ongoing treatment, and planning for alternative care for chronic critical illness may support patients and families. Organizational policies relevant to life-prolonging care need review and revision.

RS31 Development of a Nurse Dysphagia Screening Tool for Extubated Patients

Karen Johnson, Lauri Speirs, Anne Mitchell, Timothy Jackson Jr; Banner Health, Phoenix, AZ

Purpose

To validate a dysphagia screening tool (DST) for nurses to use in patients recently extubated after prolonged (> 48 hours) endotracheal intubation (PETI).

Background

Patients who receive PETI are at risk for dysphagia; indeed, half of all such patients have dysphagia. Given the high likelihood of aspiration pneumonia developing, swallowing assessments should be conducted on all patients who have received PETI. No valid and reliable dysphagia screening tools are known for this population. A team of nurses and speech language pathologists (SLPs) identified dysphagia risk factors and developed a DST for patients with PETI.

Method

After approval was granted by the institutional review board, this study was conducted in 5 adult ICUs within a health system. To validate the tool, we used a prospective, nonexperimental design implemented in 3 phases: (1) content validity was established using Delphi survey techniques with clinical experts. Content validity index (CVI) was calculated for each item and for the entire tool. (2) Interrater reliability was established by agreement with teams of nurses who simultaneously completed the tool. Cohen κ coefficient was used to measure interrater agreement. (3) Accuracy was evaluated by nurses and SLPs who blindly completed the tool on 75 eligible patients. Sensitivity and specificity were calculated from a 2 × 2 contingency table.
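Sensitivity and specificity from the 2 × 2 contingency table reduce to simple ratios of the cell counts. A minimal sketch (the counts below are hypothetical, chosen only to be consistent with the reported n = 75 and the accuracy figures in the Results; they are not the study's actual table):

```python
def screening_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and prevalence from a 2 x 2 table
    of nurse screen result vs the SLP reference standard.
    tp/fp/fn/tn = true/false positives and negatives."""
    sensitivity = tp / (tp + fn)   # dysphagia correctly flagged
    specificity = tn / (tn + fp)   # no dysphagia correctly passed
    prevalence = (tp + fn) / (tp + fp + fn + tn)
    return sensitivity, specificity, prevalence

# Hypothetical counts: 43 patients with dysphagia, 32 without (n = 75).
sens, spec, prev = screening_accuracy(tp=36, fp=12, fn=7, tn=20)
```

With these illustrative counts, sensitivity is about 0.84 and specificity 0.625, matching the magnitudes reported.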

Results

Content validity was achieved by using 16 critical care expert clinicians in a 2-round Delphi survey. Individual item scores were 0.82 to 1.0 with an overall CVI of 0.928. Interrater reliability was established (Cohen κ = 1.0). Sensitivity was 84% and specificity was 62%. The prevalence of dysphagia was 57%.

Conclusion

We established validity and reliability of a DST for nurses to use to screen for dysphagia in patients recently extubated after PETI. The DST can help nurses determine a patient’s ability to swallow after PETI in a standardized, reliable, and valid manner. Additionally, patients may be able to resume oral intake sooner after extubation because they may not have to wait for an SLP to evaluate them for dysphagia.

RS32 Survival in Cancer Patients Receiving Long-Term Mechanical Ventilation and Maximal Medical Care

Kelly Haviland, Kay See Tan, Robert Downey, Nadja Schwenk; Memorial Sloan Kettering Cancer Center, New York, NY

Purpose

Cancer patients can have respiratory failure due to their cancer or as a result of cancer treatment. Chronic respiratory insufficiency can result and lead to prolonged need for mechanical ventilation. We retrospectively reviewed cancer patients requiring long-term mechanical ventilation in order to define the likelihood of weaning from ventilatory support and of long-term survival.

Background

Likelihood of weaning from ventilatory support and long-term survival in cancer patients is unknown. Therefore, only incomplete information exists to direct therapeutic goals of care discussions among practitioners, patients, and their families. The surgical advanced care unit (SACU) at Memorial Hospital was established in January 2010 as a nurse practitioner–led team to provide uniform delivery of care to adult cancer patients receiving mechanical ventilation outside of a critical care setting.

Method

A retrospective review of Memorial Hospital patients from 2008 to 2012 who required mechanical ventilation outside of a critical care setting was approved by the institutional review board. Collected data included patient demographic and clinical characteristics, treatments, and outcomes. Overall survival was determined by using the Kaplan-Meier approach. Time to weaning was analyzed by using the cumulative incidence function approach, in which death is considered a competing risk. We investigated potential prognostic factors to include in the prospective evaluation of the effectiveness of the planned weaning protocol.
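The Kaplan-Meier (product-limit) estimator multiplies, at each observed death time, the fraction of at-risk patients who survive that time. A bare-bones sketch (illustrative only; a real analysis would use validated statistical software and handle competing risks as the authors describe):

```python
def kaplan_meier(times, events):
    """Product-limit survival curve. events[i] is 1 for death at
    times[i], 0 for censoring. Returns (time, S(t)) pairs at each
    time where at least one death occurs."""
    records = sorted(zip(times, events))
    survival = 1.0
    curve = []
    seen = set()
    for t, _ in records:
        if t in seen:
            continue
        seen.add(t)
        deaths = sum(e for tt, e in records if tt == t)
        at_risk = sum(1 for tt, _ in records if tt >= t)
        if deaths:
            survival *= 1 - deaths / at_risk
            curve.append((t, survival))
    return curve
```

Censored patients (eg, alive at last follow-up) still count toward the at-risk denominator until their censoring time, which is what distinguishes this estimate from a naive proportion surviving.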

Results

Two hundred patients received long-term mechanical ventilation; 122 had weaning as a goal of care, and for 78 the goal was palliative care. Of the patients with a goal to be weaned, 62 were weaned. The cumulative probability of being weaned, by length of time since ICU discharge, was 35% by 14 days, 41% by 21 days, 46% by 30 days, and 56% by 60 days. Among those weaned, median total duration of mechanical ventilation was 27.5 days; among those not weaned, 46 days. Median overall survival was 0.35 (95% CI, 0.25–0.59) years, and overall survival 2 years after the study period was 19%. Two-year survival was 7% (95% CI, 2%–17%) for patients not weaned and 31% (95% CI, 19%–44%) for patients who were weaned.

Conclusion

These data suggest that the goal to wean cancer patients from long-term mechanical ventilatory support can be achieved even after prolonged periods of support, but even for patients who are weaned, the likelihood of long-term overall survival is poor. The development of an algorithm to determine likely prognosis is needed to allow nurse practitioners to assist patients and their families in determining goals of care.

RS34 Smallest Discard Blood Volume From Arterial Catheter That Does Not Influence Laboratory Results

Mijin Noh; Asan Medical Center, Seoul, Korea

Purpose

To evaluate the minimum discard blood volume from an arterial catheter that does not affect results of arterial blood gas analysis, electrolyte levels, or coagulation tests.

Background

Patients admitted to the intensive care unit are at risk of blood loss because of the many blood tests required. In patients with an arterial catheter, diluted heparin solution is continuously infused in order to prevent the blood from clotting, and a certain amount of blood is discarded before blood samples are collected so that the heparin solution does not affect the laboratory results. The goal of this study is to find the smallest discard volume possible and to provide guidance for collecting blood samples from the arterial catheter.

Method

This cross-sectional study enrolled 48 patients in the adult intensive care units of hospitals in Seoul. Patients with disseminated intravascular coagulation and patients using anticoagulants were excluded. After institutional review board approval, patients signed consent before the study. From the arterial catheter, 3 mL of blood was discarded in the “Discard volume, 3 mL” group, 5 mL in the “Discard volume, 5 mL” group, and 7 mL in the “Discard volume, 7 mL” group before the 2-mL blood sample was collected. The collected data were statistically processed by using SPSS. Analysis of variance was used to compare the test results from the blood samples with the different discard volumes.
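The analysis of variance used here compares between-group to within-group variability via the F statistic. A minimal one-way ANOVA sketch (illustrative only; the study used SPSS, and the group data below are placeholders, not study values):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k independent groups,
    eg, a laboratory value measured in the 3 discard-volume groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares: group means vs the grand mean.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-group sum of squares: observations vs their group mean.
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

An F near 0 (group means nearly identical relative to within-group spread) corresponds to the large P values reported across the discard-volume groups.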

Results

Of the 48 patients, 1 was excluded from the study due to lack of an electrolyte test. The study period was June 1 to June 15, 2015. First, results of arterial blood gas analysis did not differ among the 3 groups: pH (P = .85), Paco2 (P > .99), Pao2 (P = .99), bicarbonate level (P = .99), and arterial oxygen saturation (P = .94). Second, the electrolyte results did not differ among the 3 groups: sodium (P = .97) and potassium (P = .96). Finally, the coagulation tests also showed no difference: prothrombin time (P = .96) and partial thromboplastin time (P = .75).

Conclusion

In this study, results of arterial blood gas analysis, electrolyte measurements, and coagulation tests did not differ among the 3-mL, 5-mL, and 7-mL discard volume groups. These results suggest that a minimal 3-mL discard volume from the arterial catheter is sufficient for arterial blood gas analysis, measurement of electrolyte levels, and coagulation tests. We propose further study of minimal discard volumes from the arterial catheter when collecting blood samples for complete blood cell counts and chemistry panels.