Research Oral Poster Presentation Award Winners

RS58 What Patients Want: Assessing an Educational Intervention to Improve Completion of Advance Directives

Katherine Hindere; Salisbury University Department of Nursing, Salisbury, MD

Purpose

Critical care nurses (CCNs) often care for patients at or near the end of life (EOL) who do not have an advance directive (AD) and have never discussed their EOL wishes. The purpose of this study was to develop and implement a community-based educational workshop on advance care planning (ACP) for adults and to explore whether it would increase AD completion. This study also aimed to increase education and to facilitate patient and family ACP conversations before the onset of life-threatening critical illness.

Background/significance

An AD is a tool used to express preferences for EOL care in situations that critical care patients often face, when they cannot communicate or make decisions for themselves. Estimates of AD completion range from 25% to 54%. Lack of education has been cited as a major reason for AD noncompletion. Most studies on AD completion involve patients in acute and long-term care.

Method

A cross-sectional exploratory posttest-only design was used. Institutional review board approval was obtained. A convenience sample of community-dwelling adults was recruited by using the media (online, newspaper, radio), word-of-mouth, and flyers. The workshop was designed using the Five Wishes program. Subjects attended a 1.25-hour multidisciplinary workshop on ADs. After the workshop, participants were invited to complete surveys. Data were collected anonymously by using the Advance Directive Attitude Survey (ADAS), an AD completion survey, and a demographic form. Descriptive and inferential statistical analyses including correlation, χ2, and logistic regression were completed by using SPSS 19.

Results

The workshop was attended by 81 persons, with an 86.4% (n=70) survey response rate. Participants’ ages ranged from the 20s to the 80s. Most were female (68.6%, n=48), white (90%, n=63), and college educated (88.6%, n=62). Thirty-four percent (n=24) of subjects already had an AD, and 70% (n=49) had previously spoken about an AD. After the workshop, 88.6% (n=62) reported being very likely to complete an AD, and 75.7% (n=53) were very likely to speak to their family about ADs. Men and women differed regarding having spoken about an AD (χ2=4.256, P=.04). The workshop increased understanding of ADs in 91.4% (n=64). Logistic regression revealed that increased age was a significant predictor of likelihood to complete an AD (P=.002).
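The gender comparison reported above is a 2 × 2 χ2 test of independence. A minimal sketch of how such a test could be run is shown below; the counts are chosen to match the reported marginals (70 respondents, 48 women, 49 who had spoken about an AD), but the gender split within cells is an assumption, since the abstract does not publish the full contingency table.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = men/women, columns = previously
# spoken about an AD (yes/no). Cell values are illustrative; only the
# marginals are taken from the abstract.
table = [[10, 12],   # men:   spoken, not spoken
         [39,  9]]   # women: spoken, not spoken

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, P = {p:.3f}, df = {dof}")
```

With a 2 × 2 table, `chi2_contingency` applies the Yates continuity correction by default, which is one common reason a hand-computed χ2 differs slightly from software output.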

RS46 Reevaluation of the Critical Care Pain Observation Tool in Intubated Adults After Cardiac Surgery

Sandra Linde, Jason Machan, Jennifer Beaudry, Ruth Roy, Nancy Opaluch Bushy, Kristen Martin, Amy Brucker, James Badger; Rhode Island Hospital, Providence, RI

Purpose

To reevaluate the validity and reliability of the Critical Care Pain Observation Tool (CPOT) for assessing pain in intubated adults recovering from cardiac surgery.

Background/significance

Pain assessment in critically ill patients who are intubated and sedated remains a challenge to health care providers. No universally accepted pain assessment tool is used across intensive care unit settings.

Method

Prospective, repeated-measures design. A convenience sample of 35 postsurgical patients was recruited in a 5-month period. Thirty of these patients were prospectively scored by using the CPOT immediately before and during procedures considered painful (turning) and nonpainful (central catheter dressing change). Scoring was conducted by 2 trained nurses from a pool of 6 nurses participating in the study. Concurrent validation was done by comparing CPOT scores computed during the painful and nonpainful procedures. Interrater reliability was analyzed by comparing CPOT scores of different raters, scored independently of one another.

Results

The validity of the CPOT for estimating relative amounts of pain was based on a generalized estimating equation modeling the changes in CPOT scores before and during turning and dressing change procedures. Raters’ mean CPOT scores did not increase significantly during central catheter dressing changes, but did increase during turning. The degree to which mean CPOT scores increased was significantly greater (+2.80; 95% CI, 1.84–3.75; P<.001) during turning than during central catheter dressing change. Interrater reliability was estimated by using Fleiss-Cohen weighted κ coefficient. Reliability was high at 0.87 (95% CI, 0.79–0.94), showing consistency between raters.
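The Fleiss-Cohen weighted κ used here is the quadratic-weighted kappa, which penalizes rater disagreements by the squared distance between score categories. A dependency-light sketch with hypothetical paired CPOT ratings (the abstract does not publish the raw scores) might look like:

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_categories):
    """Fleiss-Cohen (quadratic-weighted) kappa for two raters whose
    ratings are integers in 0..n_categories-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    # Observed agreement matrix
    observed = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    # Expected matrix under chance (outer product of the marginals)
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n
    # Quadratic disagreement weights
    idx = np.arange(n_categories)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_categories - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical CPOT scores (range 0-8) from two raters on the same events
rater1 = [0, 2, 5, 6, 1, 3, 7, 2, 4, 0]
rater2 = [0, 3, 5, 5, 1, 3, 8, 2, 4, 1]
kappa = quadratic_weighted_kappa(rater1, rater2, n_categories=9)
print(f"weighted kappa = {kappa:.2f}")
```

Because the weights on the diagonal are zero, perfect agreement always yields κ = 1 regardless of how the ratings are distributed.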

RS52 The Impact of Palliative Education on Moral Distress in Intensive Care Unit Nurses

Catherina Madani, Cassia Chevillon; University of California, San Diego, and University of San Diego, CA

Purpose

To determine if (1) attending palliative care education is associated with a decreased incidence of moral distress as measured on the Moral Distress Scale (MDS-R) and (2) providing palliative care education to nursing staff increases the incidence of palliative care consultation within the unit.

Background/significance

Up to 1 in 5 Americans die in intensive care units (ICUs). Caring for critically ill patients can lead to physical and emotional exhaustion for nurses, especially in an ICU setting. Moral distress is a phenomenon that occurs when nurses (1) “know the ethically appropriate action to take, but are unable to act upon it,” and (2) “act in a manner contrary to personal and professional values, which undermines integrity and authenticity.” Palliative care education and earlier involvement in ICU care may help to ameliorate the helplessness that is often experienced by nurses caring for patients at the end of life.

Method

A single-center, prospective, repeated-measures survey design was used to evaluate the effect of palliative education on moral distress scale scores. Three 4-hour classes, aligned with the priorities of the End-of-Life Nursing Education Consortium, were created by the ICU’s interdisciplinary palliative care committee and offered over 2 weeks. Nurses were recruited before each class. Informed consent was obtained, and the MDS-R and a demographic survey were distributed and completed by participants before the class began. The MDS-R was redistributed 1 month and 6 months after the class.

Results

Participants’ moral distress, as measured by MDS-R scores, differed significantly before (mean, 123; SD, 47) and after (mean, 86; SD, 47) the palliative care education (t18=3.81, P=.001): participants reported lower levels of moral distress after attending, with 44.7% of the variance explained (partial η2 = 0.447). The improvement in moral distress scores was sustained at 6 months (mean, 86; SD, 32; t16=3.6, P=.003).
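The before/after comparison reported here is a paired t test on the same nurses at two time points. A minimal sketch with hypothetical pre/post MDS-R scores (the abstract reports only group means and SDs, not individual data):

```python
from scipy.stats import ttest_rel

# Hypothetical paired MDS-R scores for the same nurses before and
# after the palliative care classes; values are illustrative only.
pre  = [120, 130, 110, 125, 140, 115, 135, 128]
post = [ 90, 105,  80, 100, 100,  90, 100,  95]

t, p = ttest_rel(pre, post)
print(f"t = {t:.2f}, P = {p:.4f}")
```

The pairing matters: `ttest_rel` tests the mean of the within-subject differences, which is usually far more sensitive than an unpaired comparison of the two group means.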

RS40 Patients Readmitted to a Pediatric Intensive Care Unit: A Retrospective Review of Risk Factors

Shauna Skog, Alex Gerber, Kimberly Statler Bennett, Jared Henricksen, Mary Jo Grant; Primary Children’s Medical Center, Salt Lake City, UT

Purpose

Predicting which patients are at risk for readmission to a pediatric intensive care unit (PICU) will help clinicians target assessments and interventions. Early identification of patients with evolving critical illness may lead to targeted response strategies to mitigate clinical deterioration. The purpose of this study was to evaluate the causes and risk factors associated with unexpected readmission to a medical/surgical/cardiovascular PICU by using a bedside pediatric early warning system (PEWS) score.

Background/significance

Predictors of PICU readmission include unstable vital signs at the time of ICU discharge. The 7-item PEWS score, with a range of 0 to 28, has good sensitivity for detecting clinical deterioration that results in unplanned transfer to the PICU. A bedside PEWS score of 8 has been associated with readmission. Patients readmitted to ICUs have higher mortality and longer lengths of stay.

Method

Analysis of a prospective patient cohort of consecutive admissions in a 27-month period from January 1, 2010, to March 31, 2012, in a 44-bed university-affiliated multidisciplinary medical/surgical/cardiac PICU. Readmissions were defined as an unplanned return to the PICU within 48 hours of transfer. PEWS scores on PICU admission, transfer, and readmission were calculated. Readmissions were classified by diagnosis and reason for admission. Procedural readmissions were excluded from analysis.

Results

A total of 4773 patients were admitted and 104 were readmitted, for a 2% readmission rate. Patients were readmitted for progression of disease (77%), discomfort in the general care area (13%), operative intervention (5%), inappropriate transfer (3%), or a new event (2%). No patients died, although 8 patients (8%) had a code team called. Mean length of stay after readmission was 3.4 days. An increase in PEWS score from admission to readmission was seen in 44% of patients. About 35% of patients were readmitted for respiratory distress. Mean PEWS score was 9.4 (range, 0–20) at PICU admission, 5.7 (range, 0–22) at time of transfer, and 8.9 (range, 0–19) at readmission. Median time in the PICU was 16 hours (range, 1–47 hours).

Research Posters

RS1 Open vs Closed Endotracheal Suctioning in Patients With Acute Respiratory Failure Undergoing Mechanical Ventilation

Martha Reeves, Glenda Harling, Arthur Wheeler, Todd Rice, Julie Foss, Suzanne Hyde, Kristen Majeske, Terry Ring, Enqu Kent, Aven McNab; Vanderbilt Medical Center, Nashville, TN

Purpose

To demonstrate that the open method of suctioning could yield more secretions from an artificial airway than would closed suctioning in patients receiving mechanical ventilation.

Background/significance

The best method of endotracheal suctioning is unknown. Retained airway secretions predispose to infection, atelectasis, hypoxemia, and airway occlusion. Closed suctioning is easier, resulting in increased compliance, decreased exposure, and fewer complications. Ventilator-associated pneumonia (VAP) is the second most common nosocomial infection in the United States, responsible for 90% of infections in patients receiving mechanical ventilation. Closed suctioning may decrease risk of VAP by decreasing exogenous or environmental contaminants.

Method

We conducted an interdisciplinary, nurse-led, randomized, open-label, crossover study to compare the effectiveness of secretion removal with 2 different methods of suctioning in patients expected to receive mechanical ventilation for at least 96 hours. Open suctioning was performed by using a red rubber catheter and sterile technique after the patient was disconnected from the ventilator. Closed suctioning was performed by using an in-line ballard, maintaining the connection to the ventilator. The initial method of suctioning was assigned by blinded envelope randomization. Each patient was suctioned every 4 hours and as needed per randomization. After 48 hours, patients crossed over to the other suctioning method.

Results

The primary end point was a paired analysis of total weight and volume of sputum suctioned in 48 hours. A total of 38 patients were enrolled; 20 (53%) were female, 37 patients were orally intubated, and 1 had a tracheostomy. Scheduled open suctioning events totaled 260 compared with 279 closed. Suctioning as needed was required in 24 patients, totaling 99 events: 40 open suctioning and 59 closed suctioning. Mean (SD) total weight of sputum from open suctioning was 79.5 (45.4) g compared with 78.9 (41.8) g with closed suctioning (P=.93). Mean (SD) total volume was 87.2 (48.7) mL with open suctioning versus 86.4 (44.2) mL with closed suctioning.

RS2 A Comparison of Head Elevation Protocols After Removal of Femoral Sheath Used for Coronary Angiography

Nancy Olson; Sarasota Memorial Hospital, Sarasota, FL

Purpose

To compare 2 standard protocols for head-of-bed (HOB) elevation after angiography. The first protocol (location 1) involved flat bed rest for 3 hours, with no HOB elevation. The second protocol (location 2) involved bed rest for 3 hours, with the HOB elevated to 30° after 1 hour and to 70° after 2 hours. The study compared bleeding complications, reported levels of back pain, and patient satisfaction scores in the area of pain management.

Background/significance

Immobilization of the affected leg after femoral sheath removal has long been considered instrumental in preventing disruption of a newly formed clot by flexion of the groin musculature. Furthermore, supine positioning with the HOB elevated less than 30° has been common practice, despite limited research to validate it. Prolonged immobilization and recumbent positioning contribute to back pain because of the decrease in muscle activity. If flat positioning is not evidence based for maintaining hemostasis and contributes to postprocedural back pain, it is worthwhile to explore an alternative to this practice.

Method

This study used a prospective comparative design with a sample size of 80. Eligible subjects gave consent before the procedure and received postprocedure care according to their facility’s protocol. Hourly pain assessment was completed by means of a numeric rating scale. Assessment for complications was performed throughout recovery, at discharge, and at follow-up. Patient satisfaction was assessed by follow-up call or visit after 24 hours.

Results

No bleeding complications, including bleeding from the access site and hematoma, were reported in any of the 80 participants. Pain scores were compared via the Numeric Rating Scale (NRS). The score was assessed at regular intervals during recovery and at discharge. Significant differences were found between group mean NRS scores at 2 hours and 3 hours after sheath removal, with significantly higher mean NRS scores for the location 1 protocol (3 hours flat) than the location 2 protocol. Patient satisfaction related to pain management was rated on a scale of 1 to 5, and the range reported was 4 to 5. No significant difference was found in patient satisfaction ratings between groups.

RS3 A Parallel Trial of Quiet Time for Patients in Critical Care

Carolyn Maidl, Annette Garcia, Jane Leske; Froedtert Hospital, Milwaukee, WI

Purpose

The primary aim was to examine the effects of a quiet time protocol in critical care. Specific research questions were: What are patients’ perceptions of sleep, pain, and anxiety? What are nurses’ perceptions of patients’ sleep? What is nurses’ satisfaction with “quiet time”? What changes in perceptions of sleep, pain, and anxiety occur during a stay in critical care? What changes in patients’ mean arterial pressure occur during a stay in critical care? This study is based on Topf’s Environmental Stress Model.

Background/significance

An adequate amount of sleep is necessary to maintain body system functions. Critical care environmental factors such as noise, lighting, and frequent care interactions often disrupt sleep. Sleep loss is associated with an increase in patients’ falls, increases in use of medications and restraints, and changes in patients’ mental status. Numerous studies have been conducted on sleep deprivation in critical care; however, few studies have examined whether a nonpharmacological “quiet time” (QT) is beneficial in critical care.

Method

A dual-unit, nonrandomized, uncontrolled parallel trial of a QT protocol was completed following recommendations from prior studies. Protocol education was provided to nurses, physicians, and ancillary departments before implementation. Written, informed consent was obtained from patients. Designated QT hours were between 1400 and 1600. Environmental stressors were reduced and comfort and sleep promoted through repositioning and pain relief before QT. Patients’ perception of sleep was obtained by using the modified version of the Richards-Campbell Sleep Questionnaire (RCSQ). Patients’ perception of pain and anxiety and nurses’ perception of patients’ sleep were obtained with a single-item indicator. Blood pressure measurements were obtained from electronic medical records immediately before and after QT.

Results

A total of 108 patients participated in 180 QTs. A 1-way repeated-measures analysis of variance was calculated to compare RCSQ scores of each participant over 3 consecutive QTs. No significant effect was found (F2,30=2.75, P=.08), although patients rated their sleep higher at each QT. A 1-way repeated-measures analysis of variance was also calculated to compare pain and anxiety scores of each participant. No significant effect was found for pain (F2,30=0.09, P=.90) or anxiety (F2,30=0.97), although anxiety levels decreased over consecutive QTs. Patients and nurses were satisfied with QT. No significant changes were found for mean arterial pressure (F2,30=1.04, P=.07).
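A 1-way repeated-measures ANOVA of this kind partitions total variability into condition, subject, and error components, then tests the condition effect against the error term. A minimal NumPy sketch with hypothetical sleep scores over 3 quiet times (illustrative data, not the study’s):

```python
import numpy as np
from scipy.stats import f as f_dist

def rm_anova(data):
    """One-way repeated-measures ANOVA.
    data: (n_subjects, k_conditions) array. Returns (F, df1, df2, p)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between conditions
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_err = ss_total - ss_cond - ss_subj                    # residual
    df1, df2 = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df1) / (ss_err / df2)
    return F, df1, df2, f_dist.sf(F, df1, df2)

# Hypothetical RCSQ-like sleep scores (0-100) for 5 subjects over 3 QTs
scores = [[55, 60, 68],
          [40, 52, 58],
          [62, 61, 70],
          [48, 55, 54],
          [35, 44, 50]]
F, df1, df2, p = rm_anova(scores)
print(f"F({df1},{df2}) = {F:.2f}, P = {p:.3f}")
```

Removing the subject sum of squares from the error term is what distinguishes this from a between-subjects ANOVA and is why each participant must contribute a score at every QT.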

RS4 A Stress Relief Initiative for Pediatric Float Nurses

Susan Romero, Angela Baldonado; Texas Children’s Hospital, Houston, TX

Purpose

The goal of the stress reduction initiative was to help a group of pediatric float nurses better manage their work-related stress through stress-relieving activities, with an emphasis on journaling and reflective practice.

Background/significance

In the nursing profession, feelings of stress and burnout are commonly reported. Many studies of health care providers have identified nurses as a professional group at high risk of burnout. Within the float pool of Texas Children’s Hospital, nurses report being stressed on a daily basis from the uncertainty of where they will be assigned each day, not having a “home” unit with a consistent work group, working with critically ill patients, and the absence of a sounding board for them to vent frustrations and concerns.

Method

In month 1, participants attended an introductory meeting outlining guidelines and the plan for the stress-relief program and completed the UWES-17 questionnaire. Sources of stress and effects of stress on the body were introduced, as was the concept of reflective practice and its 7 benefits (health and wellness, awareness, connection, focus, creativity, authenticity, and vision and dreams). Each participant was given a journal with specific questions related to work experience that facilitated guided reflective practice. In month 2, the concept of creating mandalas as an outlet to relieve stress, express one’s inner self, and allow the mind to rest was introduced; participants were given an opportunity to discuss meanings, interpretations, and feelings the mandalas brought forth and to discuss journal entries with members of the group. In month 3, the concept of the “relaxation response” was introduced, and a 10-minute exercise was conducted with participants to elicit it; participants were given the opportunity to share their experience during the relaxation exercise and to discuss journal entries with members of the group. Data gathering: Participants completed the UWES-17 questionnaire at the beginning and the end of the 3-month project, and group participants were interviewed to determine their perspectives on their stress level at the beginning and the end of the program. Analysis: UWES-17 questionnaires completed before and after the intervention were reviewed, and scores were calculated and interpreted on the basis of the UWES manual.

Results

The UWES tool is categorized into 4 parts. Vigor refers to high levels of energy and resilience, the willingness to invest effort, not being easily fatigued, and persistence in the face of difficulties. Dedication refers to deriving a sense of significance from one’s work, feeling enthusiastic and proud about one’s job, and feeling inspired and challenged by it. Absorption refers to being totally and happily immersed in one’s work and having difficulty detaching one’s self from it, so that time passes quickly and one forgets everything else around. The final category is the total score of the questionnaire. The posttest results showed significant improvement in all 4 categories.

RS5 A Survey of Oral Care Practices for Intensive Care Patients Receiving Mechanical Ventilation

Andreanne Tanguay; Université de Sherbrooke, Sherbrooke, Quebec

Purpose

(1) To describe actual oral care practices provided by critical care nurses for critically ill patients receiving mechanical ventilation and (2) to understand, with reference to the Theory of Planned Behavior, the factors influencing this behavior (intention, attitudes, subjective norms, perceived behavioral control, and beliefs).

Background/significance

Despite strong scientific evidence on the role of oral care in the prevention of systemic infections such as ventilator-associated pneumonia (VAP), oral care remains an inconsistent professional behavior in intensive care units.

Method

A descriptive cross-sectional and correlational study design was used. A mail-in self-administered survey was conducted to collect data. A convenience sample was obtained from an available population of 975 subjects using a provincial critical care nurses’ database. A 69-item instrument was developed and the psychometric properties of the instrument were analyzed for content validity, internal consistency and stability.

Results

This study reports indicators describing oral care practices and documents factors influencing professional behavior among critical care nurses in Quebec. Replies were received from 375 nurses (response rate, 38.9%). Eighty-seven percent of respondents reported providing oral care, mainly using foam swabs (98%) and water (88%). Only 4% of respondents reported using a pediatric toothbrush. Chlorhexidine was used by 24% of respondents. Frequency of oral care varied from every 2 to every 8 hours. Attitude, perceived behavioral control, resources, knowledge, and oral care education appeared to be factors affecting quality of oral care and intention to provide oral care (R2=0.32).

RS6 AACN Practice Alert: Verification of Feeding Tube Placement—Practice Implementation by Critical Care Nurses

Annette Bourgault, Elizabeth NeSmith, E. Janie Heath, Jennifer Waller, Vallire Hooper, Mary Lou Sole; Georgia Health Sciences University, Augusta, GA

Purpose

AACN practice alerts (PAs) are evidence-based guidelines intended to promote safe and excellent practice. Although PAs have been available since 2004, no studies have looked at PA adoption and related practice implementation in the clinical setting. The goal of this study was to learn more about factors influencing PA adoption and the implementation of the clinical practices recommended by this guideline.

Background/significance

Little is known about factors influencing adoption of guidelines by critical care nurses. Implementation of practices recommended by critical care guidelines has been variable (27%–82%). Inconsistent practice related to feeding tube verification has led to serious complications, including death. A better understanding of this high-risk practice may lead to improved outcomes for patients. Knowledge about PA adoption will also be helpful to AACN for future guideline development.

Method

This descriptive, exploratory study examined factors influencing adoption of the 4 main clinical practices recommended by AACN’s Verification of Feeding Tube Placement PA. An 86-item questionnaire, guided by Rogers’ Diffusion of Innovations conceptual framework, included demographics and validated measures. An invitation to participate in the national, online survey was included in Critical Care Newsline. The survey was completed by 370 critical care nurses. Logistic regression was used to analyze the dependent variable of adoption. Descriptive statistics were used to report demographics and clinical practice implementation. The α level was set at .05 for all statistical analyses.

Results

Only 55% of nurses were aware of the PA. Of the nurses who adopted the PA, only 24% had implemented all 4 recommended practices. Practice implementation was variable (23%–94%); practices were performed only some of the time (10%–73%). Implementation (23%–94%) was lower than practice awareness (60%–98%). Forty percent of nurses were unaware that auscultation was not evidence-based. Practice predictors included staff nurse/charge nurse role, traditional communication behavior, academic medical center, research/web-based communication behavior, and policy. Policy was the only significant predictor of all 4 practices. PA adoption was also a predictor for 2 clinical practices.
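Logistic regression of this kind models the probability of adopting a practice as a function of predictors, with exponentiated coefficients interpreted as odds ratios. A self-contained sketch with a single hypothetical standardized predictor (the study’s actual model, variables, and coefficients are not reproduced here):

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, n_iter=2000):
    """Fit a one-predictor logistic regression y ~ sigmoid(w*x + b)
    by gradient descent on the log-loss. Returns (w, b)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)   # gradient of log-loss wrt w
        b -= lr * np.mean(p - y)         # gradient of log-loss wrt b
    return w, b

# Hypothetical: standardized "policy in place" score vs adoption (0/1);
# both the predictor and outcomes are illustrative.
x = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(x, y)
odds_ratio = np.exp(w)  # odds ratio per 1-SD increase in the predictor
print(f"w = {w:.2f}, odds ratio = {odds_ratio:.2f}")
```

Statistical packages would also report standard errors and P values for each coefficient; this sketch shows only the point estimate and its odds-ratio interpretation.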

RS7 Advancing the Art and Science of Critical Care Nursing: Quality Indicators Evaluate an International Pediatric Intensive Care Unit Mentor Program

Brienne Johnson, Meri Clare, Rachel White, Jennifer Porkka, Maureen Hillier; Boston Children’s Hospital, Boston, MA

Purpose

(1) To obtain a baseline assessment of the quality of nursing care provided in Cambodia’s National Pediatric Hospital (NPH) Pediatric Intensive Care Unit (PICU), (2) to assess the effectiveness of educational interventions implemented by the Sister PICU Project of the Boston Children’s Hospital Medical-Surgical Intensive Care Unit (MSICU) at the NPH PICU, (3) to aid in the development of future Sister PICU Project goals and objectives, and (4) to help evaluate the sustainability of the Sister PICU Project.

Background/significance

National Pediatric Hospital is the only state-run pediatric hospital in Cambodia. The 12-bed PICU opened in 2005, but the quality of care remains markedly poor with high mortality rates. Inadequate nurse training is a major contributing factor. In 2009, nurses from the MSICU at Boston Children’s Hospital (BCH) launched the “Sister PICU Project” in collaboration with the NPH PICU at the suggestion of the World Federation of Critical Care Nurses. Six biannual educational trips have been completed.

Method

BCH teams collected baseline data about NPH PICU nursing practice on visits in 2009 and 2010. A tool was then developed to audit 12 to 15 quality indicators targeting standards of nursing care known to reduce mortality rates. During trips from 2011 to 2012, BCH teams completed the auditing tool daily by observing patients and reviewing charts, yielding audits of 97 patients. Five indicators are highlighted here: bedside emergency equipment; continuous cardiorespiratory monitoring; appropriate alarm limits; frequency of measuring vital signs (VS); and having intubated patients’ head of bed elevated 30°. The mean number of patients meeting these quality indicators was analyzed over time.

Results

Baseline data from the first 2 BCH visits to NPH found: Less than 10% of patients had continuous cardiorespiratory monitoring (CCRM); 0% had alarms turned on and/or set appropriately; 0% of patients had bedside emergency equipment; 0% had VS documented every 4 hours; 0% of intubated patients had the head of the bed elevated. Results from 2011 to 2012 showed that documentation of VS every 4 hours is still poor (mean, 6%). However, 100% of patients audited were on CCRM and 72% had appropriate alarm limits set. Additionally, 71% of patients had standard emergency equipment at the bedside and 54% of intubated patients had the head of their bed elevated to 30°.

RS8 Ambulating Patients With Pulmonary Artery Catheters Who Are Awaiting Heart Transplant

Mary Harris, Marjorie Funk, Janet Parkosewich, Prasama Sangkachand; Yale University School of Nursing, New Haven, CT

Purpose

To describe patients’ physiologic and emotional responses to ambulating with a pulmonary artery (PA) catheter while awaiting heart transplant. The specific aims were to determine (1) if there were changes in PA catheter position while ambulating, (2) if ambulation was associated with patients’ feelings of fatigue and their exercise tolerance, and (3) patients’ perceptions of how ambulation affected their sense of well-being.

Background/significance

Patients awaiting heart transplant often have a PA catheter in place to monitor their hemodynamic response to medical therapy. Traditional care of critically ill patients with PA catheters dictates maintenance of bed rest, although no evidence exists to support this practice in patients with PA catheters who are hemodynamically stable. It is important for nurses to help stable patients awaiting transplant maintain their optimal physical and emotional condition, while always ensuring their safety.

Method

The sample for our prospective descriptive study contained 8 patients in our cardiac intensive care unit who had a PA catheter, were awaiting heart transplant, and provided informed consent. We obtained data each time a patient ambulated (N = 155 walks). For each walk we assessed for changes in: PA catheter waveform, cardiac rhythm, and PA catheter position (as evidenced by a change in the marking on the catheter visible at the insertion site). We measured perceived level of exertion (Borg Scale) and fatigue (5-point Likert scale) before and after ambulating. We documented the distance walked and vital signs. We assessed patients’ perception of how ambulation affected their sense of well-being weekly.

Results

The mean age of patients was 53.9 years (range, 34–65 years), and 7 of the 8 (87.5%) were male. The mean number of walks was 19.3 (range, 1–72). There was no evidence of catheter migration or catheter-induced arrhythmias, and no reports of excessive exertion or fatigue. Patients expressed appreciation for the opportunity to increase their activity and walk, as well as feelings of improved physical well-being. Four of the patients subsequently underwent successful heart transplants.

RS9 Bacterial Colonization of Manual Resuscitation Bags

Niki Rasnake, Robert Heidel, Lisa Haddad, Mark Rasnake; University of Tennessee Medical Center, Knoxville, TN

Purpose

To evaluate the degree of bacterial colonization on resuscitation bags and to establish at what point a bag should be replaced in relation to bacterial growth, thereby decreasing the patient’s risk of developing ventilator-associated pneumonia (VAP); to determine whether the location and appearance of the bags correlate with the degree of bacterial growth; and to determine how long it took for the bags to become colonized. Our current standard for resuscitation bag care is to discard bags when “visibly soiled.”

Background/significance

Colonization of resuscitation bags is a potential source for nosocomial infection, in particular VAP. Current standards for care, including standards from the Centers for Disease Control and Prevention, state to discard resuscitation bags when “visibly soiled.” We wanted to determine if visible soiling was a reliable indicator of bacterial colonization. Past research led to best practices recommendations for VAP prevention, one of which included a dedicated storage place for the bags, which was an additional focus of our study.

Method

We conducted a prospective study that measured quantitative aerobic bacterial colonization of swabs obtained from the inner hub of the connector site of the bags. After intubation, daily culture samples were collected for up to 6 days. Patients’ demographic data and the location and appearance of the bags (clean vs soiled) were noted for each sample collection. The study was conducted in the surgical critical care (31 beds) and medical critical care (20 beds) units at a level 1 trauma center in the Southeast and involved a sample size of 147 resuscitation bags from December 2011 to April 2012.

Results

We analyzed bags from a total of 147 participants with a mean age of 54.7 years. Patients were intubated for a mean of 2.6 days. A significant difference in total positive cultures was noted between the day 1–2 cultures and the day 5–6 cultures (P=.003, Mann-Whitney test). The 1- to 2-day group had a 7.8% rate of positive cultures, the 3- to 4-day group 12.5%, and the 5- to 6-day group 26.5%. On each study day, more than 92% of the bags stored on the wall had no positive growth. Every bag with a positive culture had been rated “clean” in appearance.
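The day-group comparison here uses a Mann-Whitney U test, a nonparametric comparison of two independent samples that does not assume normally distributed counts. A minimal sketch with hypothetical colony counts (the abstract does not report the raw culture data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical colony counts from early (days 1-2) vs late (days 5-6)
# cultures; values are illustrative only.
early = [0, 0, 1, 0, 2, 0, 1, 0]
late  = [3, 5, 2, 8, 4, 6, 3, 7]

u, p = mannwhitneyu(early, late, alternative="two-sided")
print(f"U = {u}, P = {p:.4f}")
```

Because the test ranks the pooled observations, it handles the many zero counts typical of culture data better than a t test would.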

RS10 Basinless Baths Compared With Chlorhexidine Bathing to Reduce Hospital-Acquired Infections

Mary Beth Makic; University of Colorado Hospital, Aurora, CO

Purpose

To compare the effects of basinless bathing using foam no-rinse body cleanser and chlorhexidine (CHG) wipes in reducing hospital-acquired infections (HAIs), specifically bloodstream infections associated with central catheters (CLABSI) and catheter-associated urinary tract infections (CAUTIs) in critically ill adults. Specific aims were to evaluate (1) the impact of standardized bathing with a no-rinse cleanser on CLABSI/CAUTI rates; (2) the effectiveness of CHG bathing compared with standardized bathing on the reduction of CLABSI/CAUTI; and (3) potential adverse skin effects with CHG bathing.

Background/significance

Preventing CLABSI/CAUTIs remains a hospital and nursing priority. Beyond harming patients, these infections add significant cost to hospitalization. Research has shown that bath basins harbor organisms that may cause infections and that bathing with CHG-impregnated wipes may reduce the risk of HAIs and spread of multidrug-resistant organisms. No studies were found examining the impact of good bathing practices without a bath basin on patients’ infections.

Method

The study was approved by the institutional review board and conducted in 4 adult ICUs in an academic medical center. The average daily census was 48. The study had 3 phases, each 3 months in duration. In phase 1 (baseline), HAI rates were measured without changing bathing practices. In phase 2, all patients were bathed using a no-rinse agent without a basin. In phase 3, patients were randomized to CHG wipe or basinless bath. Most (96%) of the critical care nurses and ancillary staff were educated on the basinless and CHG wipe bathing protocols. CLABSI/CAUTI rates were tracked by an infection prevention specialist. Rates were defined as device-associated infections occurring during the ICU hospitalization.

Results

A mean of 4320 patient bathing episodes were observed in each 3-month phase; mean central catheter days=2372 and indwelling urinary catheter days=2994. Both CLABSI and CAUTI rates decreased when bathing practices were standardized to the basinless bath. HAI rates decreased further when patients were bathed with CHG. CAUTI rates decreased from 1.43 to 0.97 (odds ratio, 0.63) with basinless bathing; when CHG was added, CAUTI rates were 0 (odds ratio, <0.45). CLABSI rates were reduced with basinless bathing; rates decreased from 1.59 to 1.11 (odds ratio, 0.70). Less impact was seen with CHG bathing (rate, 2.06; odds ratio, 0.74). No adverse skin reactions were reported in the study. No reduction in multidrug-resistant organisms was observed.

RS11 Can Intracranial Pressure Fluctuations Be Used for Pain Assessment in Nonverbal Patients With Traumatic Brain Injury? An Exploratory Study

Caroline Arbour, Melody Ross, Celine Gelinas; McGill University, Montreal, Quebec

Purpose

To explore the usefulness of intracranial pressure (ICP) for the detection of pain in nonverbal patients with a traumatic brain injury (TBI) during common nursing procedures in the intensive care unit (ICU). More specifically, fluctuations in ICP were recorded during 2 empirically tested procedures: (1) not nociceptive (ie, noninvasive blood pressure with cuff inflation, or NIBP) and (2) nociceptive (ie, turning). Exhibition of TBI patients’ pain-related behaviors was also documented.

Background/significance

In nonverbal patients, use of behaviors is strongly recommended for the assessment of pain. However, behaviors are not usable in patients who are heavily sedated or under the effects of blocking agents. ICP is frequently monitored in TBI patients and may be used as a potential physiologic indicator of pain in nonverbal critically ill adults. Indeed, ICP fluctuations were tested for the purpose of pain assessment in neonates—an average increase of 12 mm Hg was found after procedures known to be painful (eg, heel lance).

Method

A prospective repeated-measures design was used and 16 nonverbal TBI patients participated. For each procedure (ie, NIBP and turning), patients were observed before (baseline), during, and 15 minutes after for a total of 6 assessments. ICP fluctuations were recorded continuously at the bedside by using a data collection computer (Moberg-CNS monitor). A pretested behavioral checklist combining 50 items derived from existing pain assessment tools was used to identify patients’ behaviors during each assessment. Participants’ level of consciousness (ie, Glasgow Coma Scale, or GCS), severity of brain injury, and the administration of opioids 4 hours before data collection were documented.

Results

Patients were mostly male (73%), with a mean age of 45 years, had GCS scores from 4 to 11, and were mainly admitted for a severe TBI (87%). Ten patients (67%) had a continuous infusion of fentanyl at a mean rate of 125 μg/h. Overall, mean ICP values remained stable from baseline to NIBP (−0.49 mm Hg; P=.73). ICP did not significantly increase during turning (+2.51 mm Hg; P=.16), even in patients who were not receiving a fentanyl infusion (+0.55 mm Hg; P=.90). Compared with rest, no changes in behaviors were observed during NIBP. In contrast, changes in behaviors observed during turning included eyes opening (47%), weeping eyes (20%), and repetitive movements of the upper/lower limbs (33%).

RS12 Chest Tube Dressings: Outcomes of Taking Petroleum-Based Dressings out of the Equation on Air Leak and Infection Rates

Marian Jeffries, Christine Gryglik, Diane Davies, Sheila Knoll; Massachusetts General Hospital, Boston, MA

Purpose

To identify the rationale for eliminating petroleum-based dressings over chest tube sites after insertion in preventing air leaks and wound infections.

Background/significance

Conventional medical literature suggests that petroleum gauze dressings may be necessary after chest tube placement. Concern about practice based on tradition rather than current evidence necessitated a closer look at available data. The thoracic service at this metropolitan teaching institution stopped using petroleum dressings more than a decade ago, substituting an occlusive dry dressing for petroleum gauze. Anecdotal data were positive, and retrospective data from the Society of Thoracic Surgeons (STS) database collected in a 5-year period from 2005 to 2010 indicated that in 4361 thoracic cases requiring chest tube placement, only 134 air leaks (3.1%) and 21 wound infections (0.48%) were documented. The number of chest tubes placed after surgical intervention was not put into the equation (simply, the number of cases with chest tubes), and the STS data reflected all surgical thoracic procedures regardless of disease process. If 2 or more chest tubes were placed with each surgical procedure, the numbers would reflect a much lower leak rate of 1.5% (134 leaks in 8722 tubes placed). Because identifying a wound infection was not specific to the chest tube site itself, a more specific thoracic surgical population was identified to review these data.

Method

A secondary retrospective study of 321 postoperative lobectomy cases using open thoracotomy and video-assisted thoracoscopic surgery (VATS) for lung cancer during the 2-year period from January 2009 to December 2010 was conducted. Chart audits were completed to assess specifics of chest tube dressings applied and reapplied by health care providers and to assess for the presence of air leaks and wound infections related to the chest tube insertion sites.

Results

Combined data from the more specific lobectomy population with lung cancer indicated 26 leaks (8% leak rate) and 1 wound infection (0.3%), but the chart review indicated that none were attributed to the chest tube dressing applied or the insertion site itself.

RS13 Clinical Predictors of 30-Day Hospital Readmission After Acute Myocardial Infarction and Reasons for Readmission

Frances Flynn, Muhyaldeen Dia, Marilyn Osullivan, Carol Ziebarth; Advocate Christ Medical Center, Orland Park, IL

Purpose

To identify clinical predictors of 30-day hospital readmission in patients discharged with a principal diagnosis of myocardial infarction (MI) in a large, suburban tertiary medical center. A secondary purpose of this collaborative, multidisciplinary study was to explore reasons for patients’ readmission.

Background/significance

Numerous published reports indicate that the 30-day readmission rates for MI patients remain high nationally despite use of evidence-based medical management in the hospital setting. Few studies have been done to evaluate the relationship between patients’ clinical characteristics after MI and risk for hospital readmission or the specific reasons patients are readmitted within 30 days. Evidence-based strategies are needed to drive decreased readmission rates.

Method

A retrospective analysis using an internal administrative database was conducted to identify demographic and clinical predictors of 30-day hospital readmission in patients discharged with a principal diagnosis of MI from January 2006 to June 2010. Clinical variables were determined by secondary diagnoses and selected for analysis on the basis of frequency of occurrence and potential clinical value for risk stratification. A logistic regression model was used to determine which demographic and clinical variables were predictive of hospital readmission. Disposition following the initial hospital admission and reason for hospital readmission were analyzed on the basis of codes from the International Classification of Diseases (ICD).

Results

The sample was 2614 patients with a mean length of stay of 6.7 days; 70% had non–ST-segment-elevation MI and 53% were male. Variables that had significant, independent predictive value for readmission included first hospitalization length of stay, diabetes, hyperlipidemia, heart failure, and peripheral vascular disease. Readmission within 30 days occurred for 14% of the sample; 66% of readmissions occurred within the first 2 weeks. Forty-seven percent of reasons for hospital readmission involved a variety of comorbid conditions. An examination of cardiovascular reasons for readmission showed that more than 50% were related to heart failure. Most readmitted patients were initially discharged to extended care facilities.

RS14 Collaboration Through Clinical Integration: Evaluation of Hospitalized Patients’ Survival, Length of Stay, and Cost

Cheryl McKay; University of Texas at Tyler, Tyler, TX

Purpose

A Midwestern health care system designed a model of care delivery where collaboration was purposefully woven into the structures and processes to effect positive change in outcomes. Several of the health system hospitals adopted the Clinical Integration Model (CIM); others chose to stay with a traditional primary care model. Comparing hospitals within the health system provided an opportunity to determine if the groups differed in survival, length of stay, and cost.

Background/significance

Higher mortality rates and longer hospital stays have been found in environments where collaboration is limited or not present. As many as 98 000 people die in hospitals each year as a result of medical errors that can be traced back to lack of collaboration and disjointed care. Empirical evidence supports collaboration, yet little evidence shows how to create a collaborative environment.

Method

A retrospective nonrandomized comparative design using a convenience sample over a time-limited period was used to evaluate survival, length of stay (LOS), and cost for patients with the same diagnosis in a large hospital system in the Midwestern United States. Patients receiving care within hospitals that use the CIM were compared with those cared for in hospitals that use a traditional care delivery model. A sample of patients with congestive heart failure (diagnosis-related groups 291–293) admitted to the participating hospitals within the health system was used to assess patient and hospital outcomes of survival, LOS, and cost. An extant database (Eclipses/TSI) for operations management was used to capture all data elements.

Results

A 1-way analysis of variance was conducted to evaluate the effect of the CIM on LOS and cost. Unequal group sizes and violation of homogeneity of variance required evaluation using the Welch F statistic. The groups differed significantly for LOS (F3,245=5.78, P=.001) and cost (F3,226=21.70, P<.001). Post hoc evaluation using the Games-Howell procedure revealed a shorter LOS for both intervention hospitals and significantly lower cost for the large intervention hospital. For the 1192 cases, no significant difference in survival was found (χ2=0.001, df=1, P=.98).
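
The Welch F statistic used above adjusts the ordinary ANOVA F for unequal group sizes and variances. A minimal sketch of that computation, with made-up group data rather than the study’s:

```python
def welch_f(groups):
    """Welch's ANOVA F statistic for k groups with unequal variances."""
    k = len(groups)
    ns = [len(g) for g in groups]
    means = [sum(g) / len(g) for g in groups]
    variances = [sum((v - m) ** 2 for v in g) / (n - 1)
                 for g, m, n in zip(groups, means, ns)]
    w = [n / s2 for n, s2 in zip(ns, variances)]   # precision weights
    sw = sum(w)
    grand = sum(wi * mi for wi, mi in zip(w, means)) / sw
    num = sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, means)) / (k - 1)
    lam = sum((1 - wi / sw) ** 2 / (n - 1) for wi, n in zip(w, ns))
    den = 1 + 2 * (k - 2) * lam / (k ** 2 - 1)
    return num / den

# Hypothetical length-of-stay samples for 3 hospitals (illustrative only)
f = welch_f([[4, 5, 6, 7], [5, 6, 7, 8], [9, 10, 11, 12]])
```

The weighting by n/s² is what protects the test when variance homogeneity fails; a significant F would then be followed by a Games-Howell post hoc comparison, as in the study.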

RS15 DARE to CARE About the Impact of Nursing and Organizational Characteristics on Pediatric Mortality in the United States

Patricia Hickey, Martha Curley, Kimberlee Gauvreau, Jean Connor; Boston Children’s Hospital, Boston, MA

Purpose

To link nursing, organizational, and unit level variables to in-hospital mortality for cardiac surgery patients across children’s hospitals in the United States.

Background/significance

Congenital heart disease is the most commonly occurring birth defect. Although variation in surgical outcomes for this population has been reported, little is known about the impact of nursing and organizational characteristics on mortality. Most existing studies have focused on the adult population and little is known about the impact of nursing in pediatrics.

Method

Nursing and unit characteristics from 38 children’s hospitals were obtained and then linked with patient-level data for patients younger than 18 years by using the Pediatric Health Information System (PHIS) for 2009 and 2010. The Risk Adjustment for Congenital Heart Surgery (RACHS-1) method was used to adjust for baseline differences between patients when examining associations between nursing and organizational factors and in-hospital mortality for congenital heart surgery cases.

Results

Among 20 407 eligible cases, in-hospital mortality was 2.7%. The odds of death increased as the institutional percentage of pediatric intensive care unit (PICU) nurses with 2 years or less of clinical experience increased (odds ratio [OR]=1.12 for each 10% increase, P<.001) and in PICUs with dedicated unit educators (OR=1.63, P<.001). The odds decreased as the institutional percentage of nurses with a baccalaureate degree increased (OR=0.91 for each 10% increase, P=.02), as the institutional percentage of nurses with more than 11 years of clinical experience increased (OR=0.89, P=.04) and with more than 16 years of clinical experience increased (OR=0.82, P=.006), and for hospitals participating in national quality metric benchmarking (OR=0.61, P<.001).
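
Odds ratios such as “OR=1.12 for each 10% increase” combine multiplicatively on the log-odds scale, so the implied odds ratio for larger increases follows directly. A small illustrative sketch (the helper name and values are assumptions, not from the study):

```python
from math import exp, log

def scale_odds_ratio(or_per_unit, units):
    """Odds ratios multiply on the log scale: return the OR implied for
    `units` steps of the predictor, given the OR for one step."""
    return exp(log(or_per_unit) * units)

# An OR of 1.12 per 10% increase implies, for example, for a 30% increase:
or_30 = scale_odds_ratio(1.12, 3)
```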

RS16 Dare to Walk the Walk

Michelle Nellett; Advocate Christ Medical Center, Oak Lawn, IL

Purpose

To test a mobility protocol in a cardiovascular surgical intensive care unit (ICU) by using the outcomes of length of stay (LOS), ventilation time, and patients’ perception. An interdisciplinary team collaborated to implement a mobility protocol to ensure that nurses viewed mobility as a core component of nursing care and to empower nurses and patients to focus and prioritize on therapeutic activity for patients.

Background/significance

The patient undergoing heart surgery is susceptible to muscle weakness, wasting, and debilitating effects. Many obstacles following heart surgery prevent postoperative ambulation, including mechanical ventilation, pain, and the multiplicity of catheters and equipment. Prolonged immobility can further increase mechanical ventilation time and length of stay (LOS). Early progressive mobility studies show that a collaborative team is necessary to promote quality mobility interventions.

Method

We used a quasi-experimental design with a convenience sample of ICU patients excluding all pregnant, transplanted, or transvenous paced patients. The protocol comprised 3 phases of activity ranging from an elevated head of bed to chair transfer and then ambulation. Progression was dependent upon the patient’s tolerance and not prevented by intubation status. Outcomes described were daily activity, vital signs, intolerance to activity, and the highest activity attained. Daily, the patients rated their activity perception on a visual analog scale (VAS). Mean ventilator hours and LOS in this study were compared with a retrospective cohort 6 months before protocol using a Student t test.

Results

Of 306 patients enrolled, 298 completed the study (56% male; mean age, 66.5 years). Evaluation revealed that 86.9% (n=259) of patients completed phase III: while in the ICU, often within 12 hours after surgery, they were mobilized to the chair 3 times a day and ambulated a minimum of twice a day. This patient-focused initiative ensured quality of care and safety: no falls, unplanned extubations, or accidental catheter removals occurred during mobility. VAS scores improved significantly (P<.001) during the ICU LOS, and ICU LOS decreased between groups. Mean ventilator hours differed significantly (P<.001) between groups (study group, 35 hours; control group, 63.1 hours).

RS17 Depression and Anxiety in Patients After Coronary Artery Bypass Graft Surgery

Sheila Hanvey, Ellen Sorensen, Michelle Hansen; Meridian Health: Jersey Shore University Medical Center, Neptune, NJ

Purpose

At Jersey Shore University Medical Center, no screening tool is used to determine risk factors for depression and anxiety in patients after coronary artery bypass graft (CABG) surgery. Because more needs to be known about the correlates of and risk factors for anxiety and depression after CABG, the aim of this study was to identify patient characteristics that may be associated with the development of depression and anxiety in postoperative CABG patients.

Background/significance

According to Halpin and Barnett, more than 600 000 coronary artery bypass graft (CABG) procedures are performed annually in the United States. Rafanelli et al state that relief of angina and improvement in quality of life are principal indications for this procedure; however, studies have shown that patients were not equipped for the intricacy of bypass surgery and can experience distress from unrealistic expectations. Burg et al report that depression is detected in up to 61% of postoperative CABG patients. As noted by Murphy et al, many patients endure anxiety and depression in the period after bypass surgery, and these patients may have worse outcomes than nondistressed patients. Postoperative CABG patients who are anxious or depressed are less likely to adhere to medical recommendations such as exercise and proper nutrition; to practice self-management such as monitoring weight and proper medication administration; and even to follow up and/or receive suggested cardiac testing.

Method

This study was approved by the Meridian Health institutional review board. Written consent was obtained. The Hospital Anxiety and Depression Scale (HADS) was administered to the 15 participants twice. Time 1 (T1) was during the hospital stay, postoperatively, and time 2 (T2) was via telephone interview 4 weeks after discharge. The HADS consists of 14 statements: 7 related to anxiety (A) and 7 related to depression (D). Responses are scored on a 4-point Likert scale with higher scores representing greater anxiety or depression.

Results

Fifteen patients participated in all aspects of data collection. The sample was primarily male (60%, n=9), married (93.3%, n=14), and living with their spouse (93.3%, n=14). All subjects were white. Most (73.3%, n=11) were retired; 20% were employed (n=3), and 1 was unemployed. The mean age of the participants was 68 years (range, 57–83 years); the mean age was 69 years for men and 65 years for women. Anxiety: All 15 subjects completed the HADS-A at T1 and T2. The mean score was 7.5 (SD, 2.72) at T1 and 4.0 (SD, 2.19) at T2. Paired-sample (dependent) t tests comparing T1 and T2 revealed a statistically significant decrease in anxiety (t=6.72, df=14, P<.001). Depression: The 15 subjects completed the HADS-D at T1 and T2. The mean depression score was 2.53 (SD, 1.50) at T1 and 1.4 (SD, 0.98) at T2. Paired-sample t tests comparing T1 and T2 revealed a statistically significant decrease in depression (t=3.37, df=14, P=.005).
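
The paired (dependent) t tests reported above compare each subject’s T1 and T2 scores. A minimal sketch of the computation, using invented scores rather than the study data:

```python
from math import sqrt

def paired_t(pre, post):
    """Paired (dependent) t statistic on pre/post scores."""
    d = [a - b for a, b in zip(pre, post)]          # per-subject differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / sqrt(var / n), n - 1               # t, degrees of freedom

# Hypothetical T1 and T2 scores for 3 subjects (illustrative, not study data)
t, df = paired_t([1, 2, 4], [0, 0, 0])
```

Pairing removes between-subject variability, which is why the same 15 subjects measured twice can yield a significant t with a modest sample.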

RS18 Early Detection of Acute Lung Injury in the Critically Ill: Testing the Need for Acute and Chronic Diagnoses

Srinivasan Vairavan, Ognjen Gajic, Caitlyn Chiofolo, Gregory Wilson, Man Li, Guangxi Li, Adil Ahmed, Theodore Loftsgard, Nicolas Chbat, Rahul Kashyap; Philips Research North America, Briarcliff Manor, NY

Purpose

Acute lung injury (ALI) is a devastating complication of acute critical illness and one of the leading causes of multiple organ failure and mortality in the intensive care unit (ICU). Early identification of ALI can facilitate timely implementation of evidence-based therapies. We propose a mathematical model for the early detection of ALI and test the model’s ability in the presence and absence of textual information regarding patients’ chronic and acute disease diagnoses.

Background/significance

In disease model development by using retrospective data, we use the entire patient record including diagnoses, which give insight into a patient’s current health state. For instance, ALI has several risk factors, such as sepsis, aspiration, and trauma, which are in the electronic medical record (EMR) but are unfortunately difficult to extract. As such, we test our model’s early detection of ALI with and without chronic and acute conditions.

Method

The ALI model leverages both clinical knowledge and a retrospective EMR dataset collected from mixed ICUs in a tertiary center. Two sets of algorithms were developed through translation of clinical knowledge into mathematical expressions and through data mining for ALI risk modifiers. The model aggregated the 6 algorithms to generate 1 ALI development score. Two independent reviewers retrospectively determined the gold standard diagnosis by using the American-European Consensus criteria. Two simulations were run: with and without acute and chronic diagnoses. Performance metrics such as sensitivity, specificity, and positive predictive value (PPV) were used for analysis.

Results

Training data comprised 206 ALI patients and 300 controls, whereas the validation set included 31 ALI patients and 3858 controls. With acute and chronic conditions, the ALI model achieves 87% sensitivity, 83% specificity, and 36% PPV. In addition, the model detects 70% of ALI patients a median of 7.5 (IQR, 2.4–38.4) hours before the gold standard. Without acute and chronic diagnoses, the ALI model achieves 71% sensitivity, 83% specificity, and 32% PPV. In this scenario, the model detects 59% of ALI patients a median of 8.25 (IQR, 2.0–37.2) hours early. Acute and chronic diagnoses improve the model’s sensitivity and the percentage of patients detected early, but the timing of detection is comparable in both simulations.
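
The sensitivity, specificity, and PPV figures above follow directly from confusion-matrix counts. A small illustrative sketch (the counts below are hypothetical, not the study’s validation data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and positive predictive value
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # detected fraction of true cases
    specificity = tn / (tn + fp)   # correctly cleared fraction of controls
    ppv = tp / (tp + fp)           # fraction of alerts that are true cases
    return sensitivity, specificity, ppv

# Illustrative counts only (not the study's): 27 of 31 cases detected
sens, spec, ppv = screening_metrics(tp=27, fp=650, tn=3208, fn=4)
```

Note that PPV, unlike sensitivity and specificity, depends on prevalence: with few true cases among many controls, even good specificity yields many false alarms, which is why PPV is the hardest metric for an early-warning model.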

RS19 Early Mobility in the Intensive Care Unit: Changing Unit Culture and Improving Patients’ Outcomes

Amy Doroy, Dawn Love; University of California, Davis, Medical Center, Sacramento, CA

Purpose

To reduce physical and cognitive morbidities after a stay in the intensive care unit (ICU) through focused interventions. Patients in the ICU are exposed to deep sedation and prolonged immobility with physical therapy not occurring until the patient moves out of the ICU. The detrimental physical effects of bed rest include insulin resistance, thromboembolic disease, and atrophy of muscle. The cognitive deficits reported by ICU survivors include difficulty planning/organizing, paying attention, and memory loss.

Background/significance

The Moore Foundation has awarded UCDMC an “ICU-Awakening and Breathing Coordination, Delirium Monitoring, and Exercise/Early Mobility” (ABCDE) grant. This evidence-based practice intervention places ICU patients in a protocol that includes sedating patients less deeply when possible, frequently assessing them for pain and signs of delirium, and getting them up and moving early in the hospitalization to help rebuild their mental and physical health.

Method

This cohort study compares 3 ICUs participating in the ABCDE bundle for early mobility with 4 nonparticipating units. The study measures hospital length of stay, ICU length of stay, ventilator days, mortality rates, and compliance with bundle elements.

Results

Between April and August 2012, 78 patients were discharged directly home, compared with only 62 patients during the same period the preceding year. The mean length of stay during this time decreased from 14.7 days to 11.4 days. There was a net gain in revenues of $576 725 during this 4-month period in comparison to the same 4 months the preceding year.

RS20 Effect of a Three-Times-a-Day Patient Hand Hygiene Protocol in the Intensive Care Unit to Decrease Hospital-Acquired Infections

Cherie Fox, Teresa Wavra; Mission Hospital, Mission Viejo, CA

Purpose

To determine if a 3-times-a-day patient hand hygiene protocol in the intensive care unit (ICU) would decrease hospital-acquired infections (HAIs), specifically catheter-associated urinary tract infections (CAUTIs) and central catheter–associated bloodstream infections (CLABSIs). The study team wanted to know if a protocol to clean patients’ hands would decrease the spread of pathogens that contaminate catheters and cause HAIs.

Background/significance

HAIs affect more than 2.5 million patients in the United States alone. ICU patients are at added risk of developing HAIs. Published reports support the benefits of having health care workers wash their hands; what has not been studied is the effect of a patient hand hygiene protocol on HAIs. Decreasing transient flora on patients’ hands may decrease the spread of pathogens that cause HAIs. Decreasing HAIs improves patients’ outcomes and lowers financial costs for hospitals.

Method

A quasi-experimental (pretest/posttest) study design was chosen. Three phases of investigation were determined: 12 months before implementation of the protocol, a 6-week training period, and 12 months after implementation of the protocol. Patients’ age, sex, hospital length of stay (LOS), and daily census were analyzed to compare patient-related variables that might contribute to differences in HAI rates. During the study year, 2326 patients were enrolled in the study. The hand hygiene protocol was a 3-times-a-day patient hand hygiene protocol that used 2% chlorhexidine. The CLABSI and CAUTI rates were tracked and benchmarked by epidemiologists using Centers for Disease Control and Prevention data from 2006 to 2008 for CAUTIs and from 2009 for CLABSIs.

Results

During the intervention period, CLABSI and CAUTI rates decreased markedly. CLABSI rates decreased from 1.1 per 1000 catheter-days before the protocol was implemented to 0.5 per 1000 catheter-days after implementation. CAUTI rates decreased from 9.1 per 1000 catheter-days before implementation to 5.6 per 1000 catheter-days afterward. The study team speculated that a hand hygiene protocol for patients would result in an increase in hand-washing rates among health care workers. The study team monitored this in addition to the CLABSI and CAUTI rates and found that nurses’ hand-washing compliance increased from 62% before the protocol to more than 90% during the intervention period.
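
Rates “per 1000 catheter-days” are computed by dividing the infection count by the total device-days observed. A one-line illustrative sketch (the counts are invented, not the study’s):

```python
def rate_per_1000(infections, device_days):
    """Device-associated infection rate per 1000 device-days."""
    return infections / device_days * 1000

# Illustrative only (not the study's counts): 12 CLABSIs over 10909 catheter-days
rate = rate_per_1000(12, 10909)
```

Normalizing by device-days rather than patient count is what makes rates comparable across units and periods with different catheter utilization.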

RS21 Effect of Backrest Elevation on Tissue Interface Pressure and Pressure Ulcer Formation During Mechanical Ventilation

Mary Jo Grap, Angela Bataille, Christine Schubert Kabban, Cindy Munro, Paul Wetzel; Virginia Commonwealth University, Richmond, VA

Purpose

Pressure ulcers and ventilator-associated pneumonia (VAP) are both prevalent and costly. To reduce risk for pressure ulcers, backrest positions less than 30° are recommended. However, to reduce VAP, positions greater than 30° are recommended. Higher backrest elevations may reduce VAP, but few data describe the effect of higher backrest positions on pressure ulcer formation. This longitudinal study in adults receiving mechanical ventilation describes the effect of backrest elevation on skin pressure and tissue integrity.

Background/significance

Use of higher backrest positions intended to reduce VAP has not been studied to determine the effect on factors that promote pressure ulcers, specifically pressure. There are no data that describe skin pressure over time in critically ill patients receiving mechanical ventilation. Pressure is a primary factor that promotes development of pressure ulcers. To reduce interface skin pressure, recommendations are to maintain the head of the bed at the lowest level of elevation and to limit the amount of time the head of the bed is elevated.

Method

Patients from 3 adult intensive care units who were receiving mechanical ventilation were enrolled within 24 hours of intubation. Backrest elevation (inclinometer) and pressure (XSENSOR pressure mapping system) were measured continuously for 72 hours. Tissue integrity was measured every 12 hours by skin observation (National Pressure Ulcer Advisory Panel staging system) and an objective measure of tissue injury (EPISCAN, Longport, Inc). In this preliminary analysis, descriptive statistics were used to examine the relationships among backrest elevation, interface pressure, and skin integrity. Stepwise, multivariate repeated-measures models will describe the relationships among backrest elevation levels, skin pressure, and tissue integrity over time.

Results

A total of 150 patients (mean age, 55 years; 57% male; 45% white, 48% African American) were enrolled from medical (41%), surgical (32%), and neurosurgical (27%) intensive care units. Mean backrest elevation was 24.7° (median, 25.7°); 78.2% of backrest elevation observations were less than 30°, 21% were between 30° and 45°, and 0.8% were more than 45°. Mean interface pressure over the body surface area was 21.5 mm Hg (median, 21.3), with the mean highest peak pressure over the body surface being 249.6 mm Hg (range, 59.8–256 mm Hg). As backrest elevation increased, body surface mean interface pressure decreased. One or more skin changes developed in 6 patients during the study (4 sacrum, 2 trochanter, 1 scapula, 1 heel).

RS22 Effect of In-Service Training on Pain Documentation

Nora Balke; Anaheim Regional Medical Center, Anaheim, CA

Purpose

To determine the effect of an in-service training session about institutional documentation requirements on nursing documentation related to pain management activities within patients’ medical records.

Background/significance

Pain is often a significant issue for critical care patients. Nursing documentation is often not reflective of the scope of care given during the shift. Because the patient’s chart serves as a means of communication between health care professionals, as a legal ledger of care rendered, and as a recording of each part of the nursing process, it is imperative that documentation be accurate and complete. Thorough documentation of pain management is crucial for optimizing patients’ comfort.

Method

A quantitative approach was used. The nursing documentation was audited for the presence or absence of 7 facility-required pain management activities before attendance at an in-service training session on hospital policy. These activities were presence of a full pain assessment with vital signs, patient education on pain, treating pain scale intensities of 4 or greater, presence of full pain assessment with each analgesia administration, reassessment of pain 30–60 minutes after intervention, pain as a diagnosis on the nursing plan of care, and an update of the plan of care every 72 hours. Participants then attended an in-service training session. After the training, audits were completed for the same activities.

Results

For each of the 7 pain management activities, the frequency of charting increased. Full assessment with vital signs increased from 31.7% of the time to 96.7% of the time. Patient education each shift increased from 13.3% to 83.3%. Treatment of pain scores of 4 or greater increased from 89.1% to 100%. Pain assessment with each analgesia increased from 60.4% to 100%. Reassessment of pain increased from 67.3% to 98%. Pain as a diagnosis on the nursing plan of care increased from 96.5% to 100%. Updating of the plan of care increased from 92.3% to 98.2%.

RS23 Effect of Self-Positioning During Changes in Backrest Elevation on Interface Pressures

Anathea Pepperl, Mary Jo Grap, Angela Bataille, Melissa Rooney, Ruth Burk, Christine Schubert Kabban; Virginia Commonwealth University, Richmond, VA

Purpose

Self-positioning serves to decrease skin interface pressures and relieve discomfort from compressive forces. Patients receiving mechanical ventilation (MV) are often sedated and may be impaired in their ability to reposition in response to discomfort. Additionally, MV patients are often held at higher backrest elevations to prevent aspiration. This study describes the effect of patients’ self-positioning on interface pressures after a change in backrest elevation.

Background/significance

Although higher backrest elevation and decreased mobility are factors associated with increased interface skin pressure, no data are available that describe how these factors may interact. Patients who are capable of independently repositioning their body had a lower incidence of pressure ulcers. MV patients, however, are often held at higher backrest positions in order to reduce risk for VAP. MV patients are also often sedated, which may put this population at higher risk of pressure ulcers developing.

Method

Fifty healthy participants were recruited from our university population. Participants simulated a deeply sedated patient (unable to reposition self) lying in a standard hospital bed. Backrest elevation was set at 30°, 45°, or 60° while activity level, backrest elevation, and interface pressures were recorded continuously for 30 seconds. Each participant then simulated an alert patient (able to reposition self if experiencing discomfort). Data were recorded for an additional 30 seconds. This procedure was repeated for each state and angle condition. Random effects models were used to examine the effects of backrest elevation and state on mean and peak pressure.

Results

Participants had a mean age of 30 years; 18% were male and 10% were African American. A significant interaction was found between condition and angle for both mean pressure (P<.001) and peak pressure (P<.001). The sedated group had lower mean and peak pressures than the alert group at all backrest elevations. Increases in backrest elevation increased mean pressure and peak pressure. Mean pressure ranged from 22.8 to 24.7 mm Hg, whereas peak pressure ranged from 77.1 to 101.8 mm Hg. Participants’ body mass index was significantly related to mean pressure (P<.001) and peak pressure (P<.001): higher body mass index was associated with higher mean pressure but lower peak pressure.

RS24 Effects of Patient’s Position and Operator on Quality of High-Frequency Ultrasound Scans

Ruth Burk, Anathea Pepperl, Angela Bataille, Melissa Rooney, Christine Schubert Kabban, Mary Jo Grap; Virginia Commonwealth University, Richmond, VA

Purpose

Critically ill patients are at high risk for tissue damage (pressure ulcers) caused by decreased mobility, activity, and sensation. The use of high-frequency ultrasound (HFUS) may allow identification of sacral deep tissue injury, but little is known about the effect of the patient’s position and use of multiple operators to obtain the HFUS images. The goal of this study was to investigate the quality of images with respect to patient’s position and the consistency of images among multiple operators.

Background/significance

HFUS scans of sacral regions can be difficult to obtain in critically ill patients as the probe should be held perpendicular to the patient’s skin. This is most easily accomplished when patients are turned on their side or lying prone, but may not be feasible for critically ill patients with various medical therapies. Because this technology is designed to be used at the bedside, HFUS images may be obtained by multiple operators with limited training or experience.

Method

Healthy volunteers (n=50) were assisted to assume 3 different positions: prone, 90°, and 60° lateral on the left side. HFUS images were obtained in each position with a 20-MHz ultrasound scanner after palpation of the coccyx. Three study personnel in randomized order performed sacral ultrasound scans in each position for each volunteer. Images were analyzed by using a scan quality rating from 1 (poor) to 4 (best). Summary statistics were used to describe the sample. Random effects models were used to examine the effects of operator and position on global quality rating, dermal thickness, and median intensity scores as a proxy for density measurement.

Results

A total of 957 HFUS images were analyzed. Operator (P<.001) and position (P=.001) had significant effects on Episcan rating. Patients in 60° positions had poorer quality scans than patients in the 90° or prone positions (P=.01 and .002, respectively); there was no difference between 90° and prone position ratings. HFUS quality varied by operator (P<.001). Mean quality ratings between operators ranged from 3.45 to 3.59, whereas mean quality ratings across positions ranged from 3.50 to 3.57. Ratings greater than 3.49 were considered adequate for evaluation. Dermal thickness in the prone position was significantly less than in other positions (P<.001).

RS25 Evaluating Effectiveness of Cleansing Solutions on the Reduction of Bacterial Load on Intravenous Mechanical Valve Catheters

Barbara Ehrhardt, Linda Dempsey, Judith Berra, Christine Savage, Wendi Fox; University Hospital, Cincinnati, OH

Purpose

To compare the disinfection effectiveness of sterile water, 70% alcohol, and 3.15% chlorhexidine in 70% alcohol after 3-, 10-, or 15-second scrubs.

Background/significance

Contamination of intravenous hubs can lead to catheter-related bloodstream infections, increasing morbidity, mortality, hospital lengths of stay, and costs. The most effective hub disinfectant and scrub time need to be identified.

Method

This laboratory study used a total of 132 claves divided into 11 groups of 12 each, including 1 negative control group and 1 positive control group. All of the claves except the negative controls were contaminated with a 1.0 McFarland standard solution of Staphylococcus aureus, Staphylococcus epidermidis, Escherichia coli, and Pseudomonas aeruginosa. Claves were scrubbed with sterile water, 70% alcohol, or 3.15% chlorhexidine in 70% alcohol for 3, 10, or 15 seconds.

Results

After 3- and 10-second scrubs, alcohol and chlorhexidine performed equally well, and both were more effective than sterile water. The 15-second scrubs were most effective, and alcohol was superior to chlorhexidine at this scrub time.

RS26 Forcing the Function: Evaluation and Implementation of an Intravenous Port Protector to Reduce Infections

Mary Davis; Legacy Health Good Samaritan Medical Center, Portland, OR

Purpose

To determine the influence of the Curos port protector on hospital-acquired bloodstream infection and contaminated blood culture rates.

Background/significance

Despite multiple reduction strategies throughout our 5-hospital system, we reported 39 central catheter–associated bloodstream infections (CLABSIs) in 2011. The practice of scrubbing the hub for 15 seconds with every intravenous access was impractical, and compliance with the recommendation was poor. The Curos port protector, a cap impregnated with 70% isopropyl alcohol, was introduced as an effective strategy for reducing CLABSIs. The nursing team initiated a formal product evaluation to determine if this device would decrease the incidence of CLABSIs.

Method

Three adult ICUs, 1 medical oncology unit, and presurgery, operating room, and postanesthesia recovery units participated in the 6-month study. Curos caps were placed on all peripheral and central intravenous catheter ports immediately after catheter placement (a total of 89 400 Curos caps were used). Use of Curos caps as indicated by the manufacturer’s instructions was monitored on a weekly basis. The rates of CLABSIs and contaminated blood cultures were tracked and compared with rates during the same 6-month period the preceding year.

Results

Compliance with covering all catheter access ports with the product ranged from 82% to 100%. Contaminated blood cultures decreased from 3.44% to 1.65%. CLABSI rates decreased 63% compared with rates from the previous year. A minimum estimated cost savings of $315 900 was calculated should the Curos product be implemented system-wide. Nurses overwhelmingly supported the use of the product. Results were shared with the Critical Care Quality Committee, CLABSI taskforce, and the Executive Supply and Equipment Committee.

RS27 Horizontal Violence in the Nursing Workplace: Beyond Oppressed Group Behaviors

Therese Mendez; University of New Orleans, New Orleans, LA

Purpose

Nurse researchers have attributed horizontal violence to oppression of nursing as a profession. Scholars have examined horizontal violence and developed the theoretical connection between these behaviors and oppression. The purpose of this study was to listen to the stories that nurses told about horizontal violence and to develop an alternative explanation for horizontal violence based on the perspectives of the individuals involved in these events.

Background/significance

Evidence indicates that the majority of working nurses will experience horizontal violence during their careers. Since 1983, these negative behaviors between nurses have been attributed to oppression. Horizontal violence is a complicated behavior dynamic that occurs between nurses practicing across different cultures and health care delivery systems. An alternative explanation may increase understanding of behaviors that are damaging to nurses and the patients involved in these events.

Method

Grounded theory was chosen for this study in order to examine what circumstances, from the nurse participant’s perspective, are associated with episodes of horizontal violence in the workplace. Qualitative methods are used to examine the meaning that people ascribe to social interactions and their explanations for why they respond in different ways in different contexts. Grounded theory methods were used to examine and analyze the data obtained from nurses as they recalled and reflected on their experiences with negative interactions between colleagues in the workplace.

Results

The nurses believed that they provide patient care in unpredictable environments. Perceived threats to patient care, including nurses judged to be unreliable, may be met with efforts aimed at removing a nurse from the group. These efforts may take the form of informal group sanctions ranging from isolation of the target to overt hostility. Horizontal violence is intended to stabilize the patient care environment by “running off” the target nurse. Study participants acknowledged that a target nurse would see the hostile behaviors as horizontal violence and not as patient advocacy. However, they believed that these hostile actions are necessary, at times, to “get the job done.”

RS28 Improving Caregivers’ Perceptions Regarding Patient Goals of Care and End-of-Life Issues

Carrie Sona, Marilyn Schallom, Lee Skrupky, Brian Wessman, Jennifer Aycock, Pat Baker, Catherine McHugh, Leasa Machamer, Bonnie Bausano, Elizabeth Dykeman; Barnes Jewish Hospital, St Louis, MO

Purpose

To create a novel, all-inclusive, intensive care unit (ICU)–based program focused on goals of care/end of life (GOC/EOL) with the ultimate goal of providing a multidisciplinary communication approach for the families of critically ill patients. This effort would result in an improved comfort level of ICU staff when dealing with discussing and administering EOL care, as well as transitioning to a comfort care approach.

Background/significance

The figures regarding care provided in the critical care setting are staggering, with 1 in 5 US patients dying in the ICU. Critical care providers, specifically physicians, are often inadequately prepared for discussions focused on GOC/EOL. With the projected growth in the aging population, the EOL phenomenon in ICUs will continue to be a component of critical care medicine. The proposed ICU team intervention regarding GOC/EOL communication would improve the clinical abilities of all critical care providers when discussing issues related to intensity of care.

Method

This study was done in a 24-bed surgical ICU at an academic tertiary care center with a mean of 15 deaths per month. An initial survey was circulated among the critical care staff to assess baseline expectations, satisfaction, and understanding of GOC/EOL care. A robust intervention was begun that included a subcommittee focus team, communication tools for providers, patient and family pamphlets, standardized EOL order sets, and formalized didactic sessions (based partly on the EOL Nursing Consortium curriculum). Subsequently, the same survey was circulated and results were compared with baseline data.

Results

The intervention was provided to nursing, ancillary staff, house staff, and attending physicians. It generated heightened interest in improving family communication and provided focal direction to foster this growth. Based on the serial surveys, specific staff improvements were seen in caregiver knowledge regarding ability to promote EOL care and family perceptions regarding GOC communication. Improved congruence of families and health care providers regarding decisions about intensity of care also were noted.

RS29 Improving Retention, Confidence, and Competence With an Evidence-Based Nurse Residency Program

Jean Shinners; Versant, Ithaca, NY

Purpose

As hospitals continue to place new graduate registered nurses in the intensive care setting, there must be a structured time to support their transition to practice. The purpose of the study was to investigate the results of evidence-based standards and strategies in the development of a nurse residency and to measure their effect on nurses’ perceived competency, confidence, and turnover intent.

Background/significance

Initiation into nursing can be difficult for nurse graduates, with turnover rates as high as 60%. This leads to nursing “churn,” which is both costly and detrimental to staff morale and patient safety. Although new nurses may have a foundation of academic knowledge, most do not have the skill set needed to ensure safe patient care during their initial months of practice. Common issues include satisfaction, professional role development, and ultimately, patient safety and quality of care.

Method

Research presented is from a longitudinal, descriptive study with data collected during a 10-year period. More than 6000 new graduate nurses were a part of the study. Validated measurement instruments included individual, component, and nurse evaluations; status reports; focus groups; and surveys. Outcomes of the nurse residency were analyzed by using a wide variety of metrics. Analysis included data reduction and multiple imputation, correlation matrix analysis, and regression analysis.

Results

Three main themes emerged: (1) Turnover rates decreased during the 10-year period as best practices were identified and implemented. (2) Competency observation during the 18-week residency immersion periods showed significant progress; at the end of the nurse residency, the mean observed rating was equal to or higher than that of the comparison groups, who had a mean of 17.1 months of experience. (3) The Skills Competency Self-Confidence Survey, completed by residents from the beginning of the program through month 60 after the residency, revealed that new graduates who are provided support throughout the residency show a correlational increase in self-confidence.

RS30 Initial Experience With Continuous Intra-arterial Fluorescent Glucose Monitoring in Post-surgical Intensive Care Patients

Mary Librande, Simon Bird, Jeffrey Joseph, Paul Strasma, Marjolein Sechterberger; GluMetrics, Irvine, CA

Purpose

A study of the GluCath Intravascular Continuous Glucose Monitoring System (IV-CGM) was done (1) to measure the CGM’s accuracy relative to a lab-quality analyzer (rather than a capillary glucometer) and (2) to assess the CGM’s qualitative ease of use and workflow fit. Data are reported for 5 lead-in subjects (of 20 intended) for each of 3 ICUs: Royal North Shore Hospital, Sydney, Australia; Thomas Jefferson Hospital, Philadelphia, Pennsylvania; Onze Lieve Vrouwe Gasthuis, Amsterdam, the Netherlands.

Background/significance

The NICE-SUGAR study reported a modest increase in mortality associated with a tight blood glucose (BG) target range of 80 to 110 mg/dL. The cause of the increase is not known, but the increased incidence of hypoglycemia and increase in blood glucose variability observed with intensive control are considered possible contributing factors. A CGM could potentially assist critical care nurses in efficiently addressing dangerous trends in ICU patients’ glucose levels.

Method

IV-CGM sensors were inserted post-surgically via an existing 20-gauge catheter in the radial artery. The same catheter was used for continuous pressure monitoring. Periodic ultrasound scans were done to assess the vessel’s reaction to the sensor, specifically thrombus formation. CGM glucose level was recorded each minute for 24 hours, but that information was not made available to the clinical staff. Arterial blood samples were collected for clinical management according to the hospital’s policy and study protocol, every 1–6 hours. BG was measured on a Radiometer ABL Blood Gas Analyzer to provide more accurate reference values than those of traditional glucometer systems used to measure capillary blood.

Results

Fifteen of 17 sensors were successfully deployed and prospectively calibrated; 2 sensors were replaced upon insertion because of leaks. Unrelated to the study, 1 patient had a pulmonary arrest; the CGM functioned during surgical intervention. At 1 site, 3 of 5 arterial catheters lost patency after only 6 to 8 hours. Out of 243 BG measurements, 202 (83%) were within 20% of the reference values across the range from 79 to 265 mg/dL (4.4–14.7 mmol/L). Some IV-CGM results were affected by catheter flushes and device securement. The CGM did not interfere with clinical care, blood pressure monitoring, or arterial blood sampling. No unexpected or serious adverse device effects occurred.

RS31 Intensive Care Delirium Screening Checklist As an Alternative to Confusion Assessment Method for the Intensive Care Unit in Adults Receiving Mechanical Ventilation

Takeshi Unoki, Ryuichi Yotsumoto, Hideaki Sakuramoto, Takeharu Miyamoto; University of Tsukuba Hospital, Tsukuba, Ibaraki, Japan

Purpose

To evaluate the Japanese-translated Intensive Care Delirium Screening Checklist (ICDSC) as an alternative tool for the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) to detect delirium in ICU patients receiving mechanical ventilation.

Background/significance

Studies indicate that ICU staff frequently fail to recognize delirium without use of an objective assessment tool. The CAM-ICU has been validated for use in patients receiving mechanical ventilation. Unlike the CAM-ICU, the ICDSC is a simple checklist that does not require the patient’s cooperation, and although it is used in patients receiving mechanical ventilation, its validity in this population is not well-established.

Method

We developed a Japanese version of the ICDSC by using a back-translation method. A convenience sample of adult patients receiving mechanical ventilation who were admitted to a medical-surgical ICU or an ICU specializing in emergency medicine was used. Patients with neurological disease, persistent coma, or who were receiving neuromuscular blockade were excluded. Assessments were made only when the patient’s score on the Richmond Agitation Sedation Scale was greater than −4. The researcher assessed the CAM-ICU, while the bedside nurse assessed the ICDSC independently. Both assessors were blinded to each other’s assessments. Delirium was defined as a score greater than 3 on the ICDSC.

Results

Forty-seven patients were assessed, resulting in 152 paired delirium assessments. Patients (mean [SD] age, 67 [13] years) were receiving sedatives during 96% of assessments. Ninety-seven percent of assessments involved subjects with scores between −3 and 0 on the Richmond Agitation Sedation Scale. Delirium was detected in 85 assessments (56%) by the CAM-ICU and in 78 assessments (52%) by the ICDSC, with an agreement rate of 67%. The sensitivity of the ICDSC was 68% and its specificity was 69%.
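The ICDSC’s sensitivity and specificity are computed against the CAM-ICU as the reference standard. A minimal sketch of that calculation follows; because the abstract does not report the full 2×2 table of paired assessments, the counts used here are hypothetical values chosen only to be consistent with the reported totals (152 paired assessments, 85 CAM-ICU positive, 78 ICDSC positive), and they only approximately reproduce the reported percentages.

```python
# Sensitivity, specificity, and percent agreement of a screening tool
# (ICDSC) against a reference standard (CAM-ICU), from 2x2 counts.

def screening_stats(tp, fp, fn, tn):
    """Return (sensitivity, specificity, agreement) as percentages."""
    sensitivity = tp / (tp + fn) * 100            # positives caught among reference positives
    specificity = tn / (tn + fp) * 100            # negatives caught among reference negatives
    agreement = (tp + tn) / (tp + fp + fn + tn) * 100
    return sensitivity, specificity, agreement

# Hypothetical counts: 58 positive on both tools, 20 ICDSC-positive only,
# 27 CAM-ICU-positive only, 47 negative on both (58 + 20 + 27 + 47 = 152).
sens, spec, agree = screening_stats(tp=58, fp=20, fn=27, tn=47)
print(f"sensitivity {sens:.0f}%, specificity {spec:.0f}%, agreement {agree:.0f}%")
```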

RS32 Measuring the Complexity and Autonomy of Nursing Care in the Pediatric Cardiac Intensive Care Unit by Using the CAMEO Tool

Jean Connor, Christine LaGrasta, Patricia Hickey; Boston Children’s Hospital, Boston, MA

Purpose

To develop a tool to measure the care provided by pediatric intensive care nurses and to align the needs of bedside clinicians with those of nursing leaders to justify and support appropriate resource utilization. The Complexity Assessment and Monitoring to Ensure Optimal Outcomes (CAMEO) tool was developed to abstract patient-level data, management, and nursing documentation from the medical record in real time and to automatically calculate the nursing CAMEO classification score.

Background/significance

Quantifying the value of nursing in the inpatient setting has led to the development of numerous tools describing nursing workload, intensity, and resource use in adult intensive care units. However, all lack the ability to quantify and qualify the complexity of nursing care required in pediatric intensive care units. Additionally, these tools do not allow real-time assessment of care to guide staffing models that are efficient and effective in terms of both the number and the skill of nurses.

Method

Expert pediatric cardiac intensive care nurses from a large tertiary care free-standing children’s hospital used the Delphi method to identify 19 domains of care (eg, assessment, monitoring, management, procedure, intervention). Each care description item within the domain was scored from 1 to 5 to indicate its level of cognitive complexity. Scores from the 19 domains were calculated and totaled. Based on this final score, the patient was categorized into a complexity class of I to IV, where a class I patient is physiologically stable and a class IV patient is physiologically unstable, requiring a number of interventions. The tool was retrospectively studied in a cohort of 75 patients.
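The scoring logic described above (19 domains, items scored 1 to 5, and a total mapped to complexity class I–IV) can be sketched as follows. The class cutoff values below are hypothetical, since the abstract does not report the actual thresholds used by the CAMEO tool.

```python
# Sketch of the CAMEO-style scoring described above: 19 domain scores (1-5)
# are summed, and the total is mapped to a complexity class I-IV.
# The cutoffs are HYPOTHETICAL placeholders, not the tool's published values.

HYPOTHETICAL_CUTOFFS = [(34, "I"), (50, "II"), (70, "III")]  # upper bounds; above 70 -> IV

def cameo_class(domain_scores):
    """Map 19 domain scores (each 1-5) to (total, complexity class)."""
    if len(domain_scores) != 19 or not all(1 <= s <= 5 for s in domain_scores):
        raise ValueError("expected 19 domain scores, each between 1 and 5")
    total = sum(domain_scores)
    for upper, label in HYPOTHETICAL_CUTOFFS:
        if total <= upper:
            return total, label
    return total, "IV"

# A physiologically stable patient scoring the minimum in every domain:
total, cls = cameo_class([1] * 19)
print(total, cls)  # 19 I (class I under the hypothetical cutoffs)
```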

Results

The patients ranged in age from 1 day to 47 years; 86% were admitted after surgical intervention and 14% for medical intervention. Forty-two percent required assessment and monitoring more than once an hour. Dosages of intravenous vasoactive medications were titrated in 78% of patients. Ventilated patients (72%) required a number of interventions to maintain airway patency and acid-base balance. Indirect care activities were frequently reported. Using the final complexity classification categories, 10% of patients were class I, 20% class II, 21% class III, and 49% class IV.

RS33 Moral Distress and Ethical Climate in a Pediatric/Neonatal Intensive Care Unit: Developing a Moral Community

Kathleen Marotta, Mary Peinemann, Catherine Robichaux; University Health System, San Antonio, TX

Purpose

Experiences of moral distress can jeopardize core values essential to nurses’ integrity. Over time, these compromises can lead to moral desensitization or leaving the profession. The impact of organizational climate on moral distress has been studied in nurses working with adult populations but remains unexamined in pediatric and neonatal units. This study explored moral distress, perception of ethical climate, and moral residue among registered nurses in acute pediatric and neonatal units.

Background/significance

This study is unique in that it explores the relationship of ethical climate to moral distress. Our intent was to develop interventions at the individual and organizational levels. In the past, interventions have focused on individual health care providers. However, an alternate way to address moral distress is to view individuals as members of a community. Efforts to address moral distress should focus on understanding the ethical and social structure of the organization as a moral community.

Method

Following institutional review board approval, a mixed method (quantitative and qualitative) exploratory, descriptive, nonexperimental research design was employed. Survey tools included the 20-item pediatric/neonatal Moral Distress Scale (MDS) and the 26-item Hospital Ethical Climate Survey (HEC). Two open-ended questions were included to elicit experiences of potential moral residue. A convenience sample of 152 nurses was invited to participate via Survey Monkey. Initial analysis included determination of overall mean for the MDS and HEC and individual item means and frequencies for the MDS. This study was part of a larger study that included nurses working with adult populations.

Results

Most participants (n=53) were female, aged 30 to 49 years, BSN prepared, and had 10 or more years of experience. Moral distress intensity (mean, 1.04; SD, 0.91) and frequency (mean, 0.70; SD, 0.55) were lower than reported in 1 published study of neonatal nurses. Items associated with the highest moral distress were unsafe staffing levels and perceived incompetence of coworkers. Items with the highest frequency were perceived physician incompetence and life support/treatment conflicts. Ethical climate was rated moderately high on the HEC (mean, 97.3; SD, 18.3; possible range, 26–130). Increased MDS intensity and frequency were inversely related to perceptions of a positive ethical climate. One participant responded to the open-ended questions.

RS34 No Pressure in the Trauma Bay: Use of a Silicone Foam Dressing on Trauma Patients in the Emergency Department

Celestine Parker, Andres Viles; University of Alabama at Birmingham Hospital, Birmingham, AL

Purpose

To examine the early application of a soft silicone foam dressing to the sacral area of patients admitted to the trauma and burns intensive care unit (TBICU) through the emergency department to prevent sacral pressure ulcers in trauma patients who are immobilized on backboards, slide boards, and/or procedure tables for an extended period.

Background/significance

Despite advancement in prevention of pressure ulcers in hospitalized patients, little research has been done on pressure ulcer prevention in trauma patients admitted through the emergency department. This population spends an extended period of time on hard surfaces, which could lead to a deep tissue injury (DTI) that could progress to a stage III or IV pressure ulcer.

Method

A convenience sample of trauma patients admitted through the emergency department was evaluated for eligibility. The dressing was applied in the emergency department and the follow-up occurred in the TBICU.

Results

A total of 81 charts were reviewed for documentation of dressing application. A total of 56 patients had the dressing applied; less than 1% of these patients had a sacral pressure ulcer develop. Twenty-five patients did not receive the dressing; of these, 36% had a sacral pressure ulcer develop.

RS35 Nonventilated Patients Also at Risk for Hospital-Acquired Pneumonia in the Intensive Care Unit

Barbara Quinn, Dina Gripenstraw, Dian Baker, Carol Parise; Sutter Medical Center, Sacramento, Sacramento, CA

Purpose

To determine the incidence and significance of nonventilator hospital-acquired pneumonia (NV-HAP) in the intensive care unit (ICU). Ventilator-associated pneumonia (VAP) has been well studied. Prevention bundles, such as the one recommended by the Institute for Healthcare Improvement, have dramatically reduced the incidence of VAP across the nation and in the ICUs in our institution. However, little is known about the incidence of NV-HAP in the ICU.

Background/significance

With VAP slated to become one of the new Joint Commission’s national patient safety goals (NPSGs) in January 2013, time and resources are committed to prevention and measurement of success. However, there are currently no requirements to monitor NV-HAP. The limited studies available indicate that NV-HAP is an emerging factor in prolonged hospital stays, patient morbidity/mortality, and increased cost of up to $65 000. If NV-HAP is occurring in the ICU, prevention efforts should be expanded to include ICU patients.

Method

This descriptive, quasi-experimental study used retrospective data to determine the incidence, demographics, and clinical factors of NV-HAP. The Consolidated Standards of Reporting Trials (CONSORT) research methods were used. NV-HAP data were obtained from a large, urban hospital’s electronic integrated medical management system. Inclusion criteria were all adult discharges between January 1, 2010, and December 31, 2010, with a diagnostic code of pneumonia not present on admission and meeting the Centers for Disease Control and Prevention’s (CDC’s) definition for HAP. NV-HAP cases were then attributed to either the medical/surgical units or the ICU on the basis of the date of clinical onset.

Results

A total of 24 482 patients comprising 94 247 patient-days were eligible for study inclusion. There were 14 396 adult ICU days and 79 851 adult medical/surgical days. In the ICU, 35 cases met the CDC’s definition of NV-HAP, for an infection rate of 2.43 per 1000 patient-days. During the same period, 80 cases met the CDC’s definition of NV-HAP in the medical/surgical units, for an infection rate of 1.0 per 1000 patient-days. In contrast, there were 5377 ventilation days with an infection rate of only 0.19 per 1000 ventilation-days. Demographics and risk factors for HAP were similar between ICU and medical/surgical patients.
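The reported infection rates follow directly from the counts above (cases per 1000 patient-days = cases / patient-days × 1000); a quick arithmetic check, noting that the 0.19 per 1000 ventilation-days figure implies roughly 1 ventilator-associated case:

```python
# Reproduce the per-1000-patient-day infection rates from the reported counts.

def rate_per_1000(cases, patient_days):
    """Infection rate expressed as cases per 1000 patient-days."""
    return cases / patient_days * 1000

icu_rate = rate_per_1000(35, 14396)       # ICU NV-HAP: 35 cases over 14 396 days
med_surg_rate = rate_per_1000(80, 79851)  # medical/surgical NV-HAP
vap_rate = rate_per_1000(1, 5377)         # ~1 case implied over 5377 ventilation-days
print(round(icu_rate, 2), round(med_surg_rate, 2), round(vap_rate, 2))  # 2.43 1.0 0.19
```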

RS36 Nurses’ Perspectives Related to Nursing Shift-Change Bedside Report

Nicole Steenrod; OSF St Francis Medical Center, Peoria, IL

Purpose

Nursing shift-change bedside report (NS-CBR) provides an update on the patient’s condition that allows active participation by the patient. By allowing patients to participate in NS-CBR, nurses empower them to take control of some of their health care. NS-CBR has been studied to identify best practice for patient care, yet questions remain regarding its effectiveness and nurses’ perspectives toward it. The purpose of this study was to identify nurses’ perspectives of NS-CBR.

Background/significance

Patient handoff is a critical and essential tool used within health care. The Joint Commission listed standardized handoff communication as patient safety goal 2E: “Implement a standardized approach to handoff communications, including an opportunity to ask and respond to questions.” Patient handoff is a communication tool that can promote patients’ safety and effective patient care. The Joint Commission identified communication errors as the root cause of 60% of sentinel events.

Method

Qualitative methods associated with phenomenology were used to describe nurses’ perspectives of NS-CBR. The population consisted of registered nurses who had worked full-time for at least 6 months on a 36-bed cardiac acute care floor at a large Midwestern level I trauma hospital. Interviews were conducted in a private room outside of the acute care area. Questions for the interviews were modified from the Clinical Handover Staff Survey, developed by the research team at Deakin–Southern Health Nursing Research Centre. Each interview was tape-recorded and transcribed verbatim. Interviewees were offered the opportunity to review their transcripts to verify accuracy.

Results

Analysis of the interview data yielded 5 common themes: accountability, role of the patient, time, confidentiality, and preference. These themes were discovered through word clustering, bulleting, highlighting, and comparison of interviews; a theme was defined as a cluster of analogous ideas. Answers to some questions were unanimous, although unanimity alone did not establish a theme. All interviewees were female, with a median of 9.5 years (range, 2.5–16 years) of experience as a registered nurse.

Nurses Make the Pediatric Intensive Care Unit Go Round: Nurse Presentation on Daily Rounds

Jeannine Rockefeller, E. Vincent Faustino, Kim Trotta; Yale New Haven Hospital, New Haven, CT

Purpose

Daily work rounds in the pediatric intensive care unit are fraught with inaccurate and incomplete data, which may lead to the formulation of inappropriate care plans for patients. The most current and most accurate information is essential in caring for these patients. The purpose of this study was to determine whether adding a scripted nurse presentation to daily multidisciplinary rounds would improve the accuracy and completeness of the data presented.

Background/significance

Information presented on rounds may not be the most current. Within the constraints of patient care orders, nurses adjust various therapies moment to moment in response to changes in patients’ condition, and they are responsible for real-time management of indwelling catheters and monitoring devices. These responsibilities give nurses the most current information available. Having nurses present a portion of rounds can therefore yield a more accurate representation of patients’ status.

Method

Nurses were required to present on rounds by using a script developed to facilitate the nursing presentation. The script included areas vital to daily planning, such as catheters and indwelling devices, pain control and response to pain medication, and rates and doses of continuous intravenous medications. Social, skin, and care coordination issues; need for physical or occupational therapy; and restraint status were also included. Assessments were done before and 6 months after the nursing presentation was started. Assessment tools included direct observation of rounds for accuracy of presentation and a survey of nurses’ perceptions of the accuracy and completeness of data presented before and after implementation.

Results

Direct observations of daily work rounds showed that data presented were accurate in 45% (17/38) of observations before the intervention and 67% (26/39) after the intervention (P=.07). Results of the nursing perception survey before and after the intervention were as follows: “I am comfortable correcting data presented when it is not accurate”: 97% agreed (37/38) before and 92% agreed (36/39) after (P=.10). The following issues were discussed during rounds (% of nurses who agreed): skin integrity, 18% (7/38) before and 54% (21/39) after (P=.007); rehabilitation/care coordination services, 26% (10/38) before and 26% (10/39) after (P=.40); restraints, 39% (15/38) before and 49% (19/39) after (P=.20); need for sitter, 16% (6/38) before and 46% (18/39) after (P=.002).

RS38 Nurses Uninterrupted Passing Medications Safely (NUPASS) Study

Julie Evanish, Frances Flynn, Dawn Hutchinson; Advocate Christ Medical Center, Oak Lawn, IL

Purpose

To determine if implementing best practice guidelines to limit interruptions during the medication administration process would result in a significant decrease in interruptions and nursing medication errors.

Background/significance

Medication errors are costly, increase length of stay, and can be fatal for patients. Nurses provide the final safety net to prevent medication errors from reaching patients. Many studies report that interruptions are one of the most common causes of nursing medication errors. Current studies recommend that nurses implement evidence-based interventions to avoid interruptions during medication administration and thereby improve patient safety.

Method

A quasi-experimental design was used to measure medication accuracy and interruptions before and after implementation of Medication Time-Out Guidelines. The guidelines included use of neon safety belts, docking of phones, and several tools to manage communication safely while limiting interruptions during the medication pass. Naive observations and retrospective chart review were conducted before and after guideline implementation on 2 telemetry study units and a third telemetry comparison unit that did not receive education about the guidelines. The participants were a convenience sample of registered nurses. Changes in interruptions and medication errors were tested by using Student t tests.
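A minimal sketch of the kind of before/after comparison described here, using synthetic interruption counts (illustrative only, not the study’s data) and assuming SciPy is available:

```python
from scipy import stats

# Hypothetical interruption counts per medication pass, before and after
# the Time-Out Guidelines (synthetic values for illustration only).
before = [5, 6, 4, 7, 6, 5, 8, 6]
after = [1, 2, 1, 0, 2, 1, 1, 2]

# Independent-samples Student t test, as the abstract describes.
t_stat, p_value = stats.ttest_ind(before, after)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```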

Results

A total of 631 naive observations were made during the study (316 at baseline and 315 at repeat observation). Overall, interruptions occurred on the units 15.69% of the time. The mean number of medications per pass was 4.55 before and 4.71 after the intervention. One study unit showed a statistically significant decrease in interruptions (23% to 4%, P<.001) and medication errors (11% to 3%, P=.02). The second study unit did not have a statistically significant decrease in interruptions or medication errors. Interruptions in the comparison unit did not change significantly; however, a statistically significant decrease in medication errors occurred that was unrelated to interruptions.

RS39 Occurrence of Secondary Injuries After Traumatic Brain Injury in Patients Transported by Critical Care Air Transport Teams

Susan Dukes, Meg Johantgen, Elizabeth Bridges; United States Air Force, Wright-Patterson AFB, OH

Purpose

In patients with isolated traumatic brain injury (TBI) who were transported by critical care air transport teams (CCATTs), the specific aims of this study were to (1) describe the occurrence of secondary injuries (SIs; eg, hypoxia, hypotension, hyperthermia, hypothermia, and hyperglycemia) and (2) determine if the occurrence of SIs was associated with severity of injury, mechanism of injury, type of aircraft used for transport, and year of injury.

Background/significance

TBI is considered a signature injury of current military operations. Patients who survive the primary trauma are susceptible to SI of the injured brain, and SIs are associated with worse short- and long-term outcomes. More than one-third of CCATT patients have had a TBI. CCATTs transport severely ill and injured patients on aeromedical evacuation flights through the en route care system, passing through multiple hospitals and undergoing flights lasting 8 to 14 hours onboard military cargo aircraft.

Method

A retrospective cohort study was conducted to describe the occurrence of SIs in 63 combat casualties with severe isolated TBI who were transported by CCATTs from 2003 through 2006. Data were obtained from the Wartime Critical Care Air Transport Database, which describes the patient’s physiological state and care during transport from the area of responsibility (Iraq/Afghanistan) to Germany and the United States. Descriptive statistics were used to analyze demographic data and the occurrence of SIs. A logistic regression model was used to analyze the binary occurrence of each SI with the independent variables of severity of injury, mechanism of TBI, type of aircraft, and year of occurrence.

Results

Fifty-three percent of the patients had at least 1 documented episode of an SI. Hyperthermia was the most common SI and was associated with severity of injury. The hyperthermia rate increased across the continuum but was not associated with administration of blood products. Hypoxia occurred most often within the area of responsibility but was rare during CCATT flights. The mean time from injury to arrival in Germany was 2.3 days, with the median decreasing from 2.5 days in 2003 to 1 day in 2005 and 2006. The mean time from point of injury to arrival in the United States was 6.8 days, with the median decreasing from 8 days in 2003 and 2004 to 3.5 days in 2006.

RS41 Perceptions Related to Falls and Fall Prevention Among Acutely Ill Adult Inpatients

Kathryn Renee Twibell, Debra Siela, Terrie Sproat; Ball State University, Indiana University Health, Muncie, IN

Purpose

To explore fall-related perceptions among acutely ill adult inpatients. Based on the framework for fall-related cognitive appraisals developed by Yardley et al, this study examined inpatients’ perceptions of their likelihood of falling, self-confidence in mobilizing without assistance, and intention to follow a fall prevention plan. A second purpose of the study was to examine the reliability and validity of survey items that measured inpatients’ fall-related perceptions.

Background/significance

Falls are the most common adverse event among acutely ill patients, contributing to increased suffering and economic cost. Despite extensive evidence on risk factors for falling and interventions to reduce falls, falls remain a serious safety threat. Missing from fall-related research are explorations of inpatients’ perceptions about falls and intentions to follow fall-reduction plans. Nurses need new knowledge about patients’ perspectives in order to fully engage patients in fall-reduction plans.

Method

In this descriptive correlational study, 158 acutely ill patients completed the instrumentation. Participants were mentally alert, at risk for falls, and hospitalized for acute illness or surgery. Items from 6 existing surveys that measured fall-related perceptions in non-hospitalized adults were adapted for inpatients. A new scale was constructed to measure intention to engage in fall-reduction plans. The research team collected data at the patients’ bedsides. The number of falls during hospitalization was tracked prospectively. Data analysis included descriptive statistics, internal consistency reliability, factor analysis, and correlations appropriate to the level of data.

Results

Two-thirds of the sample were females admitted for cardiac events or musculoskeletal surgery. All participants were at risk for falls, yet more than one-third perceived no likelihood that they would fall while hospitalized and did not think they would be injured, even if they did fall. Two-thirds reported no fear of falling. Inpatients who were more confident of their ability to mobilize safely without help and less afraid of falling reported less intent to follow fall-reduction plans (P<.01). Reliability was high for all multi-item scales. Construct and/or criterion-related validity were supported for all scales and single items. Following enrollment in the study, no patients fell.

RS42 Predictive Value of the Bispectral Index (BIS) for Burst Suppression on Diagnostic Electroencephalograms During Drug-Induced Coma

Richard Arbour; Einstein Medical Center, Philadelphia, PA

Purpose

To assess correlation and predictive value between data obtained with the bispectral index (BIS) and diagnostic electroencephalography (EEG) in determining degree of burst suppression during drug-induced coma. This study seeks to answer the question: “To what degree can EEG suppression and burst count as measured by diagnostic EEG during drug-induced coma be predicted from data obtained from the bispectral index such as BIS value, suppression ratio (SR), and burst count?”

Background/significance

During drug-induced coma, cortical EEG is the gold standard for real-time monitoring and drug titration. Diagnostic EEG is, from setup through data analysis, labor intensive and costly, and it is difficult to maintain uniform competency with different clinicians. BIS monitoring is less expensive and less labor-intensive, and it is easier to interpret BIS data and to establish and maintain competency in BIS monitoring. Validating BIS data against diagnostic EEG facilitates effective brain monitoring during drug-induced coma at lower cost with similar outcomes.

Method

Four consecutive patients receiving drug-induced coma/EEG monitoring were enrolled in this prospective, observational cohort study. BIS monitoring was started after patients provided informed consent. Variables recorded each minute included presence or absence of EEG burst suppression, burst count, BIS value over time, and suppression ratio (SR). Pearson product-moment and Spearman rank coefficients for BIS value and SR versus burst count were determined. Regression analysis was used to plot BIS values against bursts per minute on EEG and to plot SR against burst count on EEG. EEG/BIS data were collected by review of digital data files and transcribed onto data collection sheets at corresponding time indices.

Results

Four patients yielded 1972 data sets in 33 hours of EEG and BIS monitoring. The regression coefficient of 0.6673 shows robust predictive value between EEG burst count and BIS SR. The Spearman rank coefficient of −0.8727 indicates a strong inverse correlation between EEG burst count and BIS SR. The Pearson correlation coefficient between EEG versus BIS burst count was 0.8256, indicating a strong positive correlation. A Spearman coefficient of 0.6819 showed strong correlation between BIS value and EEG burst count. The small number of patients (n=4) limits available statistics and the ability to generalize results. Graphs and statistics show strong correlation/predictive value for BIS parameters and EEG suppression.
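The Pearson and Spearman statistics reported here are standard measures of linear and rank (monotonic) association. An illustrative sketch on synthetic data (not the study’s EEG/BIS recordings), assuming SciPy is available:

```python
from scipy import stats

# Synthetic, monotonically related series (illustration only): Spearman
# captures the monotonic association exactly; Pearson, the linear part.
x = [1, 2, 3, 4, 5, 6]
y = [1, 4, 9, 16, 25, 36]

r, _ = stats.pearsonr(x, y)      # Pearson product-moment coefficient
rho, _ = stats.spearmanr(x, y)   # Spearman rank coefficient
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```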

RS43 Preintubation Chlorhexidine Does Not Reduce Endotracheal Tube Colonization

Cindy Munro, R. K. Elswick, Mary Jo Grap, Curtis Sessler; University of South Florida, Tampa, FL

Purpose

As part of a larger randomized clinical trial, we tested the effect of a preintubation oral application of chlorhexidine on early endotracheal tube (ETT) colonization in adults undergoing mechanical ventilation. We hypothesized that patients in the preintubation intervention group would have less bacterial colonization of ETTs at extubation than would patients in the control group, who did not receive the preintubation intervention. Both groups received routine chlorhexidine beginning 12 hours after intubation.

Background/significance

Colonization of the endotracheal tube may contribute to the development of ventilator-associated pneumonia (VAP), but systematic evaluation of endotracheal tube colonization shortly after intubation has been limited. Chlorhexidine is a broad-spectrum antibacterial agent that is widely used in oral care of intubated adults. The effect of chlorhexidine on endotracheal tube colonization has not been reported, nor has preintubation application been tested outside of elective intubation.

Method

Participants in this randomized controlled trial were assigned to preintubation intervention (oral application of 5 ml chlorhexidine gluconate 0.12% solution before intubation) or to the control group (no preintubation intervention). All patients received chlorhexidine solution twice a day, beginning 12 hours after intubation. A sample was taken by swab from the interior lumen of ETTs in a subset of subjects from whom the ETT could be obtained at extubation. Potentially pathogenic species were identified by semiquantitative culture, and results were collapsed into 2 categories: colonization (moderate or many organisms) or no colonization. Logistic regression analysis was performed with terms for group, intubation length, and 2-way interaction.

Results

ETTs from 83 subjects were retrieved (43 chlorhexidine group, 40 control group). Subjects were 54% African American and 46% white; 59% were male, and a majority (55%) were urgently intubated. Mean score on the Acute Physiology and Chronic Health Evaluation II was 77.7 (SD, 26.0), and 74% of subjects were not receiving antibiotics at the time of intubation. Mean time to extubation was 3.5 days (SD, 3.2), and median length of ICU stay was 7.2 days (range, <1 day to 79 days). A majority of ETTs in both study groups were not colonized at the time of extubation (81.4% chlorhexidine, 82.5% control). ETT colonization did not differ significantly between the groups.

RS44 Pressure, Oxygenation, and Perfusion Deficits as a Model to Predict Pressure Ulcers in the Intensive Care Unit

Deborah Bly, Marilyn Schallom, W. Dean Klinkenberg, Carrie Sona; Barnes-Jewish Hospital, St. Louis, MO

Purpose

To better understand and identify the risk factors associated with skin injury related to alterations in pressure, oxygenation, and perfusion variables in critically ill adults.

Background/significance

The risk assessment instrument currently used by the hospital is the Braden Scale, which is scored at hospital admission and during each shift. Weekly pressure ulcer (PU) rounds are conducted to detect unit-acquired PUs. Most patients (97%) in the 19-bed medical intensive care unit (MICU) have a Braden Scale score less than 18, indicating that almost all of our patients are at risk; however, not all acquire a pressure-related injury. Therefore, a new instrument for risk assessment is needed.

Method

A retrospective chart review was conducted on a sample identified from weekly PU rounds’ data sheets from January 2010 through October 2010; only 1 week was used per month. Records were verified to ensure that all ICU admissions for the same hospital admission were correctly identified. A total of 161 patients with 209 ICU admissions were reviewed for variables in each category. One nurse extracted all of the information from the electronic medical record while a second nurse conducted random audits for accuracy. For the initial data set, all variables were examined in bivariate analyses against unit-acquired PU outcomes.

Results

Data were retrievable on all patients and admissions. Several variables initially screened had a high amount of missing data and were not analyzed, including cardiac output/index and central venous oxygen saturation. The following variables were significant at P<.01 in each category: (1) pressure: admission body mass index (lower index = higher risk), rectal diversion device, presence of a feeding tube or endotracheal tube, any transport off the floor, and longer stay in the unit; (2) oxygenation: ratio of PaO2 to fraction of inspired oxygen less than 200, oxygen saturation by pulse oximetry less than 90%, and use of inhaled bronchodilators; (3) perfusion: mean arterial pressure less than 60 mm Hg, diastolic blood pressure less than 50 mm Hg, systolic blood pressure less than 90 mm Hg, body temperature less than 36°C or greater than 38°C, need for continuous venovenous hemodialysis, and any requirement for a vasopressor. Lower albumin levels were also significant.

RS45 Randomized Controlled Trial of Differences in Artifact/Noise in Disposable versus Reusable Electrocardiography Lead Wires

Nancy Albert, Joel Roach, Ellen Slifcak, Terri Murray, James Bena, Jackie Spence; Cleveland Clinic, Cleveland, OH

Purpose

To examine if differences exist in the frequency of electrocardiographic (ECG) artifact/noise events in disposable (Kendall DL with the patented push-button design) vs reusable ECG lead wires (LWs) during delivery of patient care on four 24-bed cardiothoracic telemetry units. ECG artifact/noise events cause fragmentation and interruptions in nurses’ delivery of care and patients’ rest, and are distressing, especially when audible alarms are loud, prolonged, or unexpected.

Background/significance

Disposable products are increasingly prevalent in hospital care to reduce hospital-acquired infections. Disposable ECG LWs designed for hospitalized patients are available, but it is unknown if the quality, based on ECG artifact/noise events, differs from reusable ECG LWs. Durability of disposable ECG LWs, designed for single patient use, may be superior, equivalent, or inferior to reusable ECG LWs that were designed to be cleaned and reused many times.

Method

Via random assignment, 2 units used reusable ECG LWs (usual care) and 2 units alternated monthly between disposable and reusable ECG LWs for 4 months. A remote monitoring team, blinded to ECG LW type, assessed the frequency of artifact/noise events per standard procedures. Patient-related factors were collected from hospital databases. Event rates were described by using total counts and rates per 100 patient-days. Between groups, event rates were compared by using generalized linear mixed effect models. Tests of differences and noninferiority between LW types were performed. For patient-related factors, mixed and regression models were created and comparisons were weighted by patients’ length of stay.

Results

In 1611 patients (2330 admissions) and 9385.5 patient-days of ECG monitoring (disposable LWs: 4956.5 days; reusable LWs: 4429 days), patient-related factors were similar between groups. “No telemetry/LW failure/LW off” event rates were lower in the disposable LW group (No. [rate per 100]: disposable, 764 [29.8] vs reusable, 2791 [40.9]; adjusted relative risk [RR], 0.71; 95% CI, 0.53–0.96; noninferiority P<.001; superiority P=.03). No between-group differences were found in “false crisis alarms.” Disposable LWs were noninferior to reusable LWs for “all negative alarm” event rates and trended toward superiority (No. [rate per 100]: disposable, 2029 [79.1] vs reusable, 6673 [97.9]; adjusted RR, 0.81; 95% CI, 0.63–1.06; noninferiority P=.002; superiority P=.12).

RS47 Self-Efficacy and Self-Care Management of Older Persons With Heart Failure

Susan Simms, Elizabeth Schlenk; University of Pittsburgh School of Nursing, Pittsburgh, PA

Purpose

To examine the role of self-efficacy in heart failure (HF) self-care management of fluid weight gain and dietary salt intake in patients with HF. The specific aims were (1) pretest the Diet Habit Survey (DHS) salt subscale, (2) describe HF self-care self-efficacy, (3) describe self-care management behaviors, and (4) examine the relationship between self-efficacy and self-care management of preventing fluid weight gain and restricting salt intake.

Background/significance

About 6 million Americans have HF. Self-care management may prevent many of the 1.1 million hospital admissions for HF. Salt restriction nonadherence, poor weight monitoring, and unrecognized symptoms of worsening HF are common causes of readmissions. Bandura’s self-efficacy theory posits a relationship between cognition, behavior, and environment. Self-efficacy, a belief in the ability to perform behaviors to achieve desired outcomes, relates to self-care management.

Method

Participants in this convenience sample from a university-affiliated HF clinic were at least 50 years old, able to read and write English, able to care for themselves, and self-reported having had a diagnosis of HF for at least 3 months. Convergent validity of the DHS salt subscale was evaluated against the Revised HF Self-Care Behavioral Scale (RHF) and the Self-Care of HF Index (SCHFI; subscales: self-care maintenance, self-care management, and self-care self-confidence). Exploratory data analysis was performed; internal consistency reliability was indexed by Cronbach α, and Pearson r was used to examine convergent validity and the relationship between self-efficacy and self-care management. Significance was set at .05.

Results

The sample included 21 men (70%); participants had a mean (SD) age of 64.1 (9.0) years and 12.8 (2.9) years of education, and most were married (70%, n=21) and white (86.7%, n=26). The mean (SD) score on the DHS salt subscale was 18.0 (3.4), with a Cronbach α of 0.086. There were no significant correlations between the DHS salt subscale and the RHF overall score (r=0.122) or scores on the subscales for self-care maintenance (r=0.088), self-care management (r=−0.068), and self-care confidence (r=0.076). The mean (SD) score on the RHF was 101.1 (20.9). The mean (SD) scores on the SCHFI were 79.5 (11.4) for self-care maintenance, 17.1 (4.1) for self-care management, and 44.5 (11.2) for self-care confidence. Self-care confidence was significantly correlated with the RHF (r=0.647) and self-care management (r=0.794), but not self-care maintenance (r=0.290).

RS48 Simulated Hands-on Practice Increases Cardioversion, Pacing, and Defibrillation Skills

David Schmidt; The Christ Hospital, Cincinnati, OH

Purpose

Nurses certified in Advanced Cardiovascular Life Support (ACLS) must perform quick and accurate interventions when a patient has a life-threatening event. This study evaluated the self-confidence (SC), ability, and speed in performing cardioversion, transcutaneous pacing, and defibrillation (CPD) before and after a practice intervention. A convenience sample of 10 nurses from the intensive care unit and 11 telemetry nurses was recruited to participate in a study to determine if practice interventions every 6 months improved SC and skills in these essential ACLS tasks.

Background/significance

Previously, I examined SC and ability to perform CPD skills in 55 ACLS-trained nurses who had completed a critical care orientation 6, 12, or 18 months earlier. Confidence was extremely low, and only 14.5% could perform all 3 skills. The 3 cohorts showed no significant difference on any variable at the .05 level. These disappointing results led to the development of a brief training intervention to determine whether it would improve performance. Only the 6-month group was included in this follow-up study.

Method

A single-group pretest-posttest design was used. Baseline data were obtained 6 months after orientation. After consenting, participants completed a demographic and SC survey created by the investigator. SC in the ability to perform CPD was rated on a scale of 0 (not confident) to 4 (very confident) for each task. A script was used to present the scenario for cardioversion, pacing, and defibrillation, in that order. Subjects were instructed to focus only on how to manually perform the skill operating a Lifepak 12. The monthly individual intervention was a hands-on practice simulation conducted at the participant’s convenience during scheduled work. Participants were retested in the same manner after the intervention.

Results

Mean SC scores for the 3 CPD skills at baseline were 0.90 for cardioversion, 1.10 for pacing, and 1.33 for defibrillation. After the 6 monthly practice interventions, SC scores increased to 2.33, 1.81, and 2.29, respectively. Ability to perform all 3 tasks increased from 4.8% to 81%. Cardioversion was correctly performed by 57% at baseline and by 95% after the intervention. Pacing success increased from 14% to 86%, and defibrillation success increased from 86% to 100%. Time to successful completion decreased for all 3 tasks: cardioversion from 49 to 17 seconds, pacing from 80 to 35 seconds, and defibrillation from 20 to 10 seconds.

RS49 Staff Nurses’ Perspectives of Workplace Incivility

Patricia Lewis; Methodist Sugar Land Hospital, Sugar Land, TX

Purpose

Workplace incivility (WPI) is defined as “low-intensity deviant behavior with ambiguous intent to harm the target, in violation of workplace norms for mutual respect.” The purposes of this study were to (1) determine the relationships between individual and organizational factors and WPI, (2) evaluate the impact of WPI on costs and productivity, and (3) determine whether WPI differs between healthy and standard work environments.

Background/significance

Workplace violence crosses the spectrum from low-level nonphysical workplace violence to physical violence. The more insidious forms of workplace violence, like WPI, can have long-lasting effects on an organization. Recently, there has been an interest in WPI because of the evolving understanding of the importance of creating and sustaining a healthy work environment. WPI usually occurs under the radar, is thought to be benign, and frequently is not apparent to the leaders of the organization.

Method

This nonexperimental study of 659 staff nurses in Texas was conducted in 2009 and was based on the conceptual model of workplace incivility. Approval was obtained from the institutional review board. The instruments included the Nursing Incivility Scale and the Work Limitations Questionnaire. The Nursing Incivility Scale has 5 subscales: general environment, nurse, supervisor, physician, and patient/visitor. The Work Limitations Questionnaire has 4 subscales: time management, physical, mental/interpersonal, and output.

Results

In the past year, 85% (n=553) of nurses had experienced WPI. WPI scores differed between healthy and standard work environments, and WPI and productivity were negatively related. Lost productivity was calculated at $11 581 per nurse per year. A negative relationship was found between nurses’ perception of their manager’s ability to handle WPI and WPI scores on all subscales except patient/visitor. The intensive care unit and the medical/surgical setting had lower WPI scores than did the operating room (P<.001). Novice nurses had lower WPI scores than nurses with more than 3 years of experience on all subscales except patient/visitor.

RS50 The Bilateral Bispectral Index (BIS): A New Approach for the Detection of Pain in Patients With a Traumatic Brain Injury

Caroline Arbour, Celine Gelinas, Melody Ross, Tarek Razek, Patricia Bourgault, Carmen Loiselle, Ashvini Gursahaney, Colleen Stone, Manon Choiniere; McGill University, Montreal, Quebec

Purpose

In the intensive care unit (ICU), behavioral reactions of patients with a traumatic brain injury (TBI) are often blurred by sedation and altered level of consciousness. As such, assessing pain in nonverbal TBI patients is challenging. Although alternative pain measures are recommended in individuals unable to self-report, very few have been explored in brain-injured patients. This study described fluctuations in the bilateral Bispectral Index (BIS) in nonverbal TBI patients exposed to common procedures.

Background/significance

Increases in BIS values (a 0–100 electroencephalography-based parameter) were observed in non–brain-injured ICU patients with altered level of consciousness when exposed to procedures known to be painful. Based on such findings, the BIS could potentially be useful for pain detection in nonverbal critically ill patients. Of note, lateralization of brain activity often occurs in critically ill patients with a TBI. Given its capacity to monitor both hemispheres separately, the new bilateral BIS could be more useful for the detection of pain in TBI patients.

Method

Twenty-five TBI patients (17 males and 8 females) from a level I trauma ICU participated. Bilateral BIS values (BIS-L, left hemisphere; BIS-R, right hemisphere) were recorded with a BIS VISTA monitor 1 minute before (baseline) and during 2 empirically tested procedures: one that was nociceptive (ie, turning) and one that was not nociceptive (ie, noninvasive blood pressure [NIBP]). The electromyographic (EMG) activity and the signal quality index (SQI) were recorded to assess for artifacts in the BIS signal. Information about TBI patients’ level of consciousness (Glasgow Coma Scale [GCS]), brain lesion severity, and localization of injury was documented. Descriptive statistics were calculated and t tests were done for all variables.

Results

Participants were a mean of 53.8 years old, were hospitalized for moderate to severe TBI (96.0%), and had a GCS score from 9 to 12. Participants had mean baseline values of 56.1% (BIS-L) and 52.7% (BIS-R), indicating deep sedation. Compared with baseline, BIS-L (+7.2%) and BIS-R (+6.4%) increased significantly (P<.05) during turning, whereas BIS-L (−0.5%) and BIS-R (+1.4%) remained stable during NIBP measurement. Of note, patients with right-sided TBI showed higher increases in BIS-L (+6.2%; P<.05), and those with left-sided TBI showed higher increases in BIS-R (+13.8%; P<.05) during turning. On average, EMG was 36.3 dB and SQI was 85.7 during the procedures, indicating no artifacts in BIS signal.

RS51 The Clock Is Ticking: Increasing Nurses’ Satisfaction with Computer Documentation One Keystroke at a Time

Melinda Heath, Pamela Beller; Aultman Hospital, Canton, OH

Purpose

Many nurses find electronic documentation cumbersome and time-consuming, taking valuable time away from direct patient care. Lack of education on system functionality (shortcuts) may contribute to these issues. This study demonstrated that nurses’ satisfaction increases and nursing documentation time decreases after nurses receive education on the electronic documentation system’s “bells and whistles” (shortcuts).

Background/significance

A literature review confirmed that the use of electronic charting is cumbersome and time-consuming for nurses, who are the primary record-keeping sources for patients. In 2010, Stevenson, Nilsson, Peterson, and Johansson reported that nurses find that electronic patient records are not user-friendly. Other factors obstructing the use of electronic charting include nurses’ lack of knowledge of how to set personal preferences on the new system.

Method

This study used surveys both before and after the training, 4 one-on-one education sessions, and time studies before and after the training. The study population consisted of the staff from a critical care step-down unit. Data collection on time required for nursing documentation included time spent on giving report, prioritizing assessment, and documenting the first vital signs of the shift. The education sessions included training on how to implement personal preferences in the documentation system.

Results

Results for the time studies were evaluated by using a Wilcoxon signed rank test. Except for logging on, which served as a constant, all documentation times were significantly shorter after the training. The Wilcoxon signed rank test was also used to compare survey responses before the training with responses after the training. The intervention produced a statistically significant increase in nursing satisfaction.
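The Wilcoxon signed rank test used here is the nonparametric counterpart of the paired t test: it ranks the absolute before/after differences and compares the rank sums of positive and negative changes. A minimal sketch with hypothetical documentation times in seconds (not the study's data):

```python
def wilcoxon_signed_rank(before, after):
    """Wilcoxon signed rank statistic W for paired samples.
    A small W (relative to the null distribution) indicates a systematic
    shift. Zero differences are dropped, per the usual convention."""
    diffs = [a - b for b, a in zip(before, after) if a != b]
    abs_sorted = sorted(abs(d) for d in diffs)

    def rank(v):
        # rank of |difference| v, averaging ranks across ties
        positions = [i + 1 for i, x in enumerate(abs_sorted) if x == v]
        return sum(positions) / len(positions)

    w_plus = sum(rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(rank(abs(d)) for d in diffs if d < 0)
    return min(w_plus, w_minus)

# Hypothetical (not study) data: seconds to document first vital signs
before = [190, 205, 240, 180, 220, 210]
after = [150, 160, 170, 185, 155, 165]
w = wilcoxon_signed_rank(before, after)
```

The statistic W is then compared with tabulated critical values (or a normal approximation for larger samples) to obtain the P value.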

RS53 The Impact of Simulation Training on Self-confidence in Facilitating Family Presence During Emergency Procedures

Cathy Hiler, Emily Turner; Carilion Roanoke Memorial Hospital, Roanoke, VA

Purpose

To explore the impact of education and simulation training on nurses’ confidence in facilitating family presence during resuscitation. The hypothesis was that staff participating in this training regarding family presence would develop confidence in working with family members at the bedside during resuscitation.

Background/significance

In concert with the AACN’s mission and standards to provide family-centered care, there is increasing emphasis on supporting families of adult patients during emergency situations. Research indicates that health care providers often do not encourage family presence. Multiple factors influence this reluctance; among them, staff cite lack of knowledge about how to work with families in this situation. Simulation training allows nurses to develop and practice communication with families during these difficult times.

Method

Registered nurses from the coronary care unit participated in didactic and simulation training regarding family presence at the bedside during resuscitation. These nurses were invited to participate in an online Family Presence Self-confidence Scale (FPS-CS) survey duplicated with permission from Twibell et al. Simulation training and online education were required for each nurse. Staff participated in a brief simulation in which they each practiced specific communication strategies outlined in “Presenting the option for family presence.” Participation in the online survey before and after the training was voluntary. Following training, nurses participated in debriefing and reflected on the process.

Results

A total of 17 nurses participated in the survey before the training and 18 after the simulation training. Responses indicated that nurses found the simulation training beneficial in developing confidence in facilitating family presence during resuscitation. Mean scores on the question that addressed communicating about resuscitation measures increased from 4.17 to 4.53 and mean scores increased from 3.94 to 4.53 on the question about being able to provide comfort measures. Overall, each of the communication questions showed an increase in mean scores after the simulation training.

RS54 Use of Thermoregulation Head Wrap to Facilitate Rewarming of Infants Undergoing Cardiopulmonary Bypass Surgery

Karen Sakakeeny, Michele Degrazia; Boston Children’s Hospital, Boston, MA

Purpose

To determine the safety and feasibility of using the thermoregulation head wrap, a newly designed device, on infants during the rewarming period of cardiopulmonary bypass (CPB) surgery.

Background/significance

CPB patients are cooled to decrease metabolism and protect the myocardium and brain. When the procedure is completed, the temperature of blood in the bypass pump is gradually increased and the patient is rewarmed. After separation from the pump, infants can experience a decrease in body temperature. Current standards do not include any type of head covering for rewarming. In this study, we tested a new head covering made of biaxially oriented polyethylene terephthalate, known as Mylar, to support rewarming.

Method

In this phase I descriptive pilot study, we tested the safety and feasibility of a new thermoregulation head wrap on a sample of 10 infants undergoing CPB surgery. To describe the feasibility of the thermoregulation head wrap, the infant’s medical team completed a questionnaire on ease of use. To characterize the temperature progression from the onset of rewarming to arrival in the intensive care unit (ICU), interval body temperatures were recorded in real time. Also, to identify and describe adverse events, nurses recorded preoperative and postoperative interval skin assessments. The study population and outcome measures were analyzed by using descriptive statistics.

Results

Health care providers reported that the thermoregulation head wrap was easily applied to the infant’s head at the start of rewarming and was removed without difficulty upon arrival in the ICU. Infants experienced a steady increase in body temperature from (a) the onset of rewarming (28°C), to (b) removal of bypass cannulas (28.9°C), to (c) removal of the rectal temperature probe before transfer from the operating room (34.5°C), to (d) arrival in the ICU (36.0°C). Furthermore, no adverse events were observed during the course of the investigation.

RS55 Using Severity-of-Illness Scores as Part of an Educational Program for Critical Care Nurses at a County Teaching Hospital

Jovie De Leon-Luck, Adrian Smith; Alameda County Medical Center, Oakland, CA

Purpose

To determine if education of critical care nurses on the use of the Acute Physiology and Chronic Health Evaluation (APACHE) score improves plan of care and patients’ outcomes in the intensive care unit (ICU). The ICU clinical nurse specialist and the critical care outcomes nurse analyst are involved in an ongoing study at a county teaching hospital, using APACHE scores to benchmark mortality rates and length of stay (LOS) in days against national data.

Background/significance

The APACHE IV is a severity-of-illness ICU scoring system, with higher scores corresponding to more severe disease and a higher risk of death. An educational program that teaches ICU nurses how to use these scores can help improve their critical thinking skills. Such training can help ICU nurses center their care on the patient’s severity of illness and target areas that require immediate attention and close monitoring. The efficiency of nurse staffing and the allocation of resources can also be monitored.

Method

The most recent APACHE IV version is being used to collect data. The data are abstracted retrospectively from all ICU patients’ charts by 3 ICU nurses and manually entered into the APACHE calculator, a computer-based program that automatically generates a score for the patient. Interrater reliability is maintained by random audits of 10% of the patient population by using a double abstraction form. The point score is calculated from 12 routine physiological measurements during the first 24 hours after admission and is not recalculated during the stay; it is by definition an admission score. The resulting point score is always interpreted in relation to the severity of the patient’s illness.

Results

Hospital data obtained from January 2011 to June 2012 are presented. The total number of patients was 1179 for 2011 and 593 for the first half of 2012. The mean APACHE score was 54 in 2011 (national mean, 50) and 52 in January to June 2012 (national mean, 50). The actual mortality rate was 11.81% for 2011 (national rate, 16.51%) and 8.74% in January to June 2012 (national rate, 11.65%). The top admitting diagnosis for both periods was head trauma. The mean LOS was 4.37 days (national mean, 4.45 days) in 2011 and 4.48 days (national mean, 4.42 days) in January to June 2012. The higher APACHE scores may be indicative of the higher acuity of patients in this level II trauma hospital. The lower mortality rates may reflect a proactive ICU team. LOS differed little from the national means.

RS56 Utilization of the Kreg Tilt Bed to Promote Mobilization in Intubated Patients in the Medical Intensive Cardiac Unit

Deborah Duey; Advocate Christ Medical Center, Oak Lawn, IL

Purpose

An innovative Kreg tilt bed was used to push beyond conventional boundaries associated with early mobility of patients undergoing mechanical ventilation. The purpose of this study was to evaluate the use of the Kreg tilt bed for early mobility in intensive care unit (ICU) patients undergoing mechanical ventilation.

Background/significance

Tilting ICU patients who are undergoing mechanical ventilation into an upright position is creative, safe, and feasible and may prevent complications of immobility without the barriers associated with ambulating such patients. Mobility protocols with early physical therapy (PT) have shown reduced ICU length of stay (LOS), yet little research has been done using bed tilting for mobility in the ICU. Driven by patients’ needs, we sought to use tilting to improve the activity level in our ICU.

Method

We conducted a case-control study in the medical ICU. Patients in whom the tilt protocol was used were adults (18 years or older) receiving mechanical ventilation, with a fraction of inspired oxygen of 0.60 or less, a positive end-expiratory pressure of 10 cm H2O or less, and oxygen saturation of 88% or higher, who had previously been ambulatory. Twice-daily 20-minute tilts, increased by 15° each day, were done to a final tilt goal of 80° by day 4. PT provided daily exercise, and rate of perceived exertion (RPE) was recorded. Adverse events were measured. The study patients were compared with a retrospective control group case matched by age, sex, and medical diagnosis. Outcome comparisons between groups were tested with independent t tests.

Results

Sixty participants were enrolled, 30 in the control group and 30 in the protocol group. The mean age was 70 years, 12% were male, and the mean score on the Acute Physiology and Chronic Health Evaluation was 84 for the protocol group and 83 for the control group. The groups did not differ significantly in demographic characteristics. Of the 197 tilts, 30° was the most commonly tolerated angle, and 83% of the subjects were able to bear weight for 20 minutes. More than 90% of tilts resulted in no adverse events; knee buckling was the most common adverse event (3% of tilts). The groups did not differ significantly in discharge disposition, ICU LOS, or duration of mechanical ventilation. The RPE increased significantly for the study group, from a mean of 1.46 to 4.5 (P<.001).

RS57 Validation of the Critical-Care Pain Observation Tool in Adult Intensive Care Patients

Laurel Stocks, Sherill Cronin, Virginia Keal, Cheryl Stout, Paul Buttes; Baptist Hospital East, Louisville, KY

Purpose

To examine the reliability and validity of the Critical-Care Pain Observation Tool (CPOT) in a general population of adult, critically ill patients.

Background/significance

Effective management of pain begins with accurate assessment of its presence and severity, which is difficult in critically ill patients. The Faces, Legs, Activity, Cry, and Consolability (FLACC) scale, which was designed to measure postoperative pain in children younger than 7 years, is our current tool. The Critical-Care Pain Observation Tool (CPOT) was developed to evaluate behaviors associated with pain; however, this tool has been validated only in cardiac surgical populations.

Method

In a convenience sample of 75 noncardiac surgical patients from critical care units of a community hospital who met the inclusion criteria, pain was evaluated by 2 evaluators using 3 different tools at 3 times: before repositioning (a nociceptive procedure), during repositioning, and after repositioning. The FLACC scale, the CPOT, and the Pain Intensity Numeric Rating Scale (NRS), a scale from 0 to 10 (0=no pain present and 10=worst possible pain), were compared. The NRS is presently the standard measurement of pain in cognitively intact individuals.

Results

Reliability and validity of the CPOT were acceptable. Interrater reliability was supported by strong intraclass correlations (range, 0.74–0.91). For criterion-related validity, significant associations were found between CPOT scores and both FLACC (0.87–0.92) and NRS (0.50–0.69) scores. Discriminant validity was supported by significantly higher scores during repositioning (mean, 1.85) versus at rest (mean before, 0.60; mean after, 0.65).
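The intraclass correlations reported above for interrater reliability partition score variance into between-subject and within-subject (between-rater) components. A minimal sketch of one common form, the one-way random-effects ICC(1,1), using hypothetical ratings; the exact ICC model used in the study is not stated in the abstract:

```python
def icc_1_1(scores):
    """One-way random-effects ICC(1,1) for interrater reliability.
    scores: one row per subject, one column per rater."""
    n = len(scores)      # number of subjects
    k = len(scores[0])   # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    # between-subjects and within-subjects (between-raters) mean squares
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2 for row, m in zip(scores, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical (not study) data: CPOT scores from 2 raters for 6 patients
ratings = [[2, 2], [0, 1], [3, 3], [1, 1], [4, 3], [2, 2]]
icc = icc_1_1(ratings)
```

Perfect agreement between raters drives the within-subject mean square to 0 and the ICC to 1; the 0.74 to 0.91 range reported above therefore indicates good to excellent interrater reliability.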

Footnotes

Presented at the AACN National Teaching Institute in Boston, Massachusetts, May 18–23, 2013.