RS1 Assessing Health-Related Quality of Life in Nonsurgical Elderly Patients With Aortic Stenosis
Kristin Sandau, Wes Pedersen; Bethel University and the Minneapolis Heart Institute Foundation, St Paul, MN
Purpose
High-risk elderly patients receiving nonsurgical treatment for aortic stenosis (AS) are rarely studied. This study’s purpose was to measure Health-Related Quality of Life (HRQL) and test reliability of the Minnesota Living with Heart Failure questionnaire (MLHFQ), Geriatric Depression Scale (GDS), and Functional Assessment of Chronic Illness Therapy-Spiritual Well-being Scale (FACIT-Sp) in persons more than 70 years old receiving nonsurgical treatment for AS.
Background/significance
Past studies of HRQL in AS focused on valve replacement and often used 1 generic measure. No disease-specific measure for HRQL in AS exists. Among patients for whom surgery is not an option, HRQL measurement helps identify factors most concerning to patients. AACN’s values include collaboration and advocacy for patients during the most vulnerable times in their lives. HRQL evaluation allows patients to communicate their unique values and concerns, with potential for nurse-led interventions.
Method
Questionnaires were administered to 25 consecutive patients aged 75 to 97 years (mean, 85 years) enrolled in a clinical trial for AS. This descriptive study used the MLHFQ to measure physical, emotional, and overall HRQL; the GDS for depressive symptoms; and the FACIT-Sp for spiritual well-being. Cronbach α was used to determine internal consistency reliability. Four additional investigator-designed items were used to measure angina, light-headedness, and respective effect of each symptom on quality of life.
Results
Mean aortic valve area was 0.54 cm² (range, 0.37–0.96 cm²); 93.6% reported exercise limitation. Many patients reported angina (52%) and light-headedness (72%). MLHFQ scores varied widely (median, 52; range, 7–101). The median GDS score was 4 (range, 1–13), but almost half (48%) scored higher than 10, a positive screen for depression. FACIT-Sp scores were moderately high (median, 37.5; range, 18–45), indicating that many patients had strong spiritual well-being. Depression and spiritual well-being had a significant inverse relationship (r = −0.73; P < .01; 95% CI, −0.87 to −0.48). Cronbach α was 0.91, 0.83, and 0.81 for the MLHFQ, GDS, and FACIT-Sp, respectively.
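For readers unfamiliar with the internal-consistency statistic reported here, the sketch below computes Cronbach α from raw item scores. The item data are invented for illustration; they are not study data, and the function is a minimal standard formulation, not the authors' code.

```python
# Minimal sketch of Cronbach's alpha (internal-consistency reliability),
# the statistic reported for the MLHFQ, GDS, and FACIT-Sp.
# NOTE: the item scores below are hypothetical, not study data.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, each of equal
    length (one entry per respondent). Returns Cronbach's alpha."""
    k = len(items)
    sum_item_var = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# 4 hypothetical items rated by 5 respondents
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 5],
    [3, 4, 2, 5, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.91 for this toy data
```

Values in the 0.8–0.9 range, as reported for these instruments, indicate that a scale's items vary together and the scale is internally consistent.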
Conclusions
The HRQL measures selected had good internal consistency reliability but needed to be supplemented with disease-specific items to avoid missing key concerns (angina, light-headedness). Studies of minimally invasive aortic valve replacement should include these items. Further study in larger samples may direct future research; education and interventions for depressive symptoms; safe physical activity and fall prevention for those with light-headedness; and support for spiritual well-being.
RS2 Sedation as a Mediator of the Stress Response During Mechanical Ventilation
Mary Jo Grap, Jessica Ketchum, Cindy Munro; Virginia Commonwealth University, Richmond, VA
Purpose
The goals of sedation are comfort and physiological stability, even during noxious experiences. During stress, disturbances of this stability may be manifested through the neuroendocrine response (via markers such as β-endorphin and salivary α-amylase). Noxious experiences may be unavoidable in critical care, but whether sedation ameliorates their adverse effects is unknown. The goal of this study was to determine the effect of sedation on physiological responses during and after a noxious stimulus (endotracheal suctioning).
Background/significance
Pain and psychological stress affect the neuroendocrine response. Serum β-endorphin is a biological marker of response to noxious stimuli, including pain, psychological stress, surgery, and critical illness. Salivary α-amylase is a surrogate marker of autonomic nervous system activation during stress and has been associated with increased plasma catecholamine levels, increased sympathetic activity, and decreased heart rate variability in several studies of psychological and physical stressors.
Method
Blood for endorphin and saliva for amylase were obtained immediately before and after endotracheal tube suctioning. Heart rate (HR), respiration rate (RR), oxygen saturation (Spo2), and arm and leg actigraphy (ACT; measure of discomfort) were continuously recorded before and after suctioning. Sedation level (RASS) before suctioning was categorized into 3 levels of sedation (deep, mild, alert). Paired t tests were used to compare endorphin and amylase values from before to after suctioning. Generalized linear mixed effects models were used to model the changes in HR, RR, Spo2, and ACT over time after suctioning for each RASS group, after controlling for baseline HR, RR, Spo2, and ACT, respectively.
Results
Sixty-eight subjects (mean age, 55 years; 51.5% male; 50.8% African American) were enrolled from medical (63%) and surgical (37%) intensive care units (ICUs). They were deeply sedated (37%), mildly sedated (54%), or alert (9%). Amylase increased with suctioning (P = .04), but endorphin levels did not change (P = .58), and neither response was modified by sedation (endorphin, P = .83; amylase, P = .83). Compared with baseline, there was no change in HR, RR, or Spo2 through 30 minutes after suctioning, and changes were not modified by level of sedation. Arm (P = .007) and leg (P = .06) actigraphy changed from baseline, and the changes in both arm and leg actigraphy differed with level of sedation (P < .001 for each).
Conclusions
The goals of sedation in critical care are to provide physiological stability and patient comfort, and sedation should be effective even during noxious stimuli. Although one marker of stress (amylase) did increase during the noxious event, neither marker was affected by sedation level. Subject movement also increased during the noxious event, suggesting that discomfort may have been present. Because movement was affected by sedation level, movement may be an advantageous marker of comfort.
RS3 Delirium in Trauma Patients: Prevalence and Predictors
Breighanna Wallizer, Tiffany Blacklock, Karen McQuillan, Kathryn Von Rueden, Heesook Son; University of Maryland Medical Center, Baltimore, MD
Purpose
To determine the prevalence of delirium using the Confusion Assessment Method-Intensive Care Unit (CAM-ICU) in trauma patients in both ICUs and intermediate care units (IMCUs) and to identify characteristics of trauma patients that are predictive of delirium onset during hospitalization.
Background/significance
Delirium, an acute change or fluctuation in mental status with inattention and disorganized thinking, has been reported in up to 87% of ICU patients and goes unrecognized in up to 84% of patients. Delirium has been linked to increased mortality, morbidity, length of stay, and cognitive dysfunction after discharge. Little is known about delirium in trauma or IMCU patients; only one small prospective study and one retrospective study of delirium in trauma ICU patients were found in the literature.
Method
A cross-sectional descriptive study was conducted on a sample of ICU and IMCU patients at an urban academic level I trauma center. Illness severity was measured by using the Acute Physiology and Chronic Health Evaluation (APACHE) III. Sedation level was assessed by using the Richmond Agitation-Sedation Scale (RASS). Delirium was evaluated by using the CAM-ICU. Of 800 patients screened for the study, English-speaking trauma patients without brain injury, history of psychosis, or cognitive impairment, significant hearing or vision loss, or a RASS score less than −3 were included. Eligible patients were assessed for delirium by 7 trained nurses. Descriptive statistics with correlations were conducted. Logistic regression was used to identify predictors of delirium.
Results
A sample of 215 trauma patients met inclusion criteria. Delirium prevalence was 24% overall: 36% among ICU patients (n = 113) and 11% among IMCU patients (n = 102). Delirium was related to older age (P = .004), higher APACHE III score (P < .001), lower RASS score (P < .001), mechanical ventilation (MV; P < .001), anesthetic sedatives (P < .001), and psychotropic agents (P = .001). A model with these 6 variables predicted delirium (P < .001), explaining 51.9% of variance (Nagelkerke R2 = 0.519). Delirium was more likely in patients receiving MV (P < .001; odds ratio [OR], 4.726), receiving psychotropic agents (P = .016; OR, 3.9), or with higher APACHE III scores (P = .002; OR, 1.057). Patients with higher RASS scores were less likely to have delirium (P < .001; OR, 0.31).
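As a brief aside on interpreting these results: a logistic-regression odds ratio applies per unit of its predictor, so the APACHE III odds ratio of 1.057 compounds multiplicatively across larger score differences, and an odds ratio below 1 can be read through its reciprocal. The arithmetic below is illustrative only, not the study's model code.

```python
# Interpreting the per-unit odds ratios from the logistic regression above.
# Illustrative arithmetic only; not the study's model code.

APACHE_OR = 1.057   # odds ratio per 1-point APACHE III increase
RASS_OR = 0.31      # odds ratio per 1-point RASS increase

# An OR compounds multiplicatively across units of the predictor:
# a patient 10 APACHE III points higher has ~1.74x the odds of delirium.
print(round(APACHE_OR ** 10, 2))  # 1.74

# An OR below 1 is protective; its reciprocal gives the odds ratio in
# the opposite direction: 1 point lower on the RASS carries ~3.2x the odds.
print(round(1 / RASS_OR, 1))  # 3.2
```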
Conclusions
Inclusion of ICU and IMCU trauma patients in this study provides a view of delirium not previously reported, demonstrating that delirium occurs in both areas. The results suggest that trauma patients who are receiving mechanical ventilation and have high illness severity, a lower RASS score, or are receiving psychotropic agents were more likely to have delirium. Knowing predictors of delirium in trauma patients can identify those at highest risk, affording opportunity for early implementation of preventive strategies.
RS4 Impact of a Chlorhexidine Gluconate Bathing Protocol on Acquisition of Methicillin-Resistant Staphylococcus aureus in Adult Intensive Care Units
Ann Petlin, Cassandra Landholt, Paula Mantia, Kathleen McMullen, Donna Prentice, Marilyn Schallom, Carrie Sona, Jena Stewart; Barnes-Jewish Hospital, St Louis, MO
Purpose
To examine the impact of a bathing protocol that uses chlorhexidine gluconate (CHG) and bath basin management on the acquisition of methicillin-resistant Staphylococcus aureus (MRSA) in 5 adult intensive care units (ICUs).
Background/significance
MRSA is a virulent organism that causes substantial morbidity and mortality in the ICU. MRSA has been cultured from bath basins in ICUs, which may contribute to skin colonization. As part of a multi-institutional study, patients in the surgical ICU were bathed with chlorhexidine gluconate (CHG) antimicrobial soap. Results of that study demonstrated a reduction in MRSA acquisition.
Method
This study used a pre-/post-intervention design. Patients in the cardiothoracic, medical, and surgical ICUs had nasal swabs for MRSA on admission, weekly, and upon discharge. We defined MRSA acquisition in these 3 units as a positive nasal or clinical culture in a patient who had a negative admission swab. We defined MRSA acquisition in the coronary care unit and a second medical ICU as any patient with a new positive MRSA at any site 48 hours after admission. We designed a CHG bathing protocol by using a 4-oz bottle of 4% CHG soap in a bath basin of warm water. All nurses learned the bathing procedure and basin maintenance by the end of 2009. Implementation began in January 2010.
Results
Infection prevention personnel monitored MRSA conversion rates and reported them monthly. OpenEpi epidemiologic statistics software was used to calculate rates and the rate ratio for the preimplementation (July 2008–December 2009) and postimplementation (January 2010–April 2011) periods. In the preintervention period, 132 patients acquired MRSA in 34 333 patient-days (rate, 3.84 per 1000 patient-days). In the postintervention period, 109 acquisitions occurred in 41 376 patient-days (rate, 2.63 per 1000 patient-days). The rate ratio was 1.46 (95% CI, 1.12–1.90; P = .003). Therefore, patients in the preintervention period were about 1.5 times more likely to acquire MRSA than were patients who received the CHG bathing protocol.
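The rate arithmetic above can be reproduced from the published counts and patient-days. The sketch below uses a standard log-normal confidence interval, which is an assumption on our part; OpenEpi's exact method may yield slightly different limits than the reported 1.12–1.90.

```python
# Reproducing the MRSA acquisition rates and rate ratio from the
# published counts. The log-normal CI is an assumption; OpenEpi's
# method may differ slightly from these limits.
import math

def rate_ratio(cases_pre, days_pre, cases_post, days_post, z=1.96):
    rate_pre = cases_pre / days_pre      # pre-intervention acquisition rate
    rate_post = cases_post / days_post   # post-intervention acquisition rate
    rr = rate_pre / rate_post
    se_log = math.sqrt(1 / cases_pre + 1 / cases_post)  # SE of log(rr)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rate_pre * 1000, rate_post * 1000, rr, lo, hi

pre, post, rr, lo, hi = rate_ratio(132, 34333, 109, 41376)
print(f"pre: {pre:.2f}, post: {post:.2f} per 1000 patient-days")  # 3.84, 2.63
print(f"rate ratio: {rr:.2f} (95% CI ~{lo:.2f}-{hi:.2f})")        # 1.46
```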
Conclusions
The CHG bathing protocol is easy to implement and led to decreased unit-acquired MRSA rates in a variety of adult ICUs. Four-ounce bottles of 4% CHG soap are relatively inexpensive at about $2.30 each. Thus, this is also a low-cost intervention to prevent MRSA infection.
RS5 Assessing Acute Pain in Critically Ill Patients Who Are Unable to Communicate
Tracey Wilson, Karen Kaiser, Deborah McGuire, Diane Pannullo, Marguerite Russo, Debra Wiegand; University of Maryland Medical Center, Baltimore, MD
Purpose
Inadequate assessment of pain contributes to uncontrolled pain in critically ill patients. A valid and reliable instrument is needed to assess acute pain in critically ill patients who are unable to self-report. The purpose of this study was to determine the reliability, validity, and clinical usefulness of the Multidimensional Objective Pain Assessment Tool (MOPAT) when used to assess acute pain over time in critically ill patients in a medical intensive care unit (MICU).
Background/significance
The framework guiding this study is based on a multidimensional conceptualization of the pain experience. The MOPAT assesses behavioral (BEH) and physiological (PHYS) dimensions of acute pain. It has established validity, reliability, and clinical utility when used on a one-time basis in acute care settings, but additional testing was needed to demonstrate validity, reliability, and clinical utility with repeated use over time in the critical care setting.
Method
A convenience sample of MICU patients unable to communicate participated in this instrument testing study approved by the institutional review board. Pain was assessed at 2-hour intervals on day shift for up to 3 days and daily by a triad of nurses at 2 time points (T1 and T2) surrounding a painful event (eg, suctioning, turning). MICU nurses completed a monthly clinical utility questionnaire. Descriptive statistics were used to analyze patient and nurse demographic data and clinical utility data. Reliability was assessed by using percentage agreement between all nurse raters. Validity was assessed by using a paired t test for all raters to examine sensitivity of the MOPAT at T1 and T2 (ie, during and after pain).
Results
The sample consisted of 27 patients (164 pain ratings) and 20 nurses. Internal consistency (coefficient α) reliability surrounding painful events was .68 at T1 and .72 at T2. Coefficient α was .80 at T1 and .81 at T2 for the BEH dimension, and .37 at T1 and .57 at T2 for the PHYS dimension. Interrater agreement during pain was 68% for the BEH dimension and 80% for the PHYS dimension. Validity was demonstrated by significant decreases (P < .001) in MOPAT BEH, PHYS, and total scores from T1 to T2. Based on responses (N = 146) to the clinical utility questionnaire items, MICU nurses agreed or strongly agreed (81.5%–99.3%) that the MOPAT was clinically useful.
Conclusions
When used over time, the MOPAT demonstrates acceptable reliability and excellent validity, and it is clinically useful. Additional testing of the MOPAT is needed in larger and more diverse critical care populations. If such studies confirm that the MOPAT is reliable, valid, and clinically useful, it can be incorporated into critical care settings as an easy-to-use instrument to assist in assessing and managing pain in critically ill patients who cannot communicate.
RS6 Barriers and Facilitators to Quality End-of-Life Care for Children
Renea Beckstrand; Brigham Young University, Provo, UT
Purpose
In 2008, more than 53 000 children died in the United States, the majority spending the end of their lives in acute care facilities, mostly in critical care units. Nurses caring for these children are often with them at the end of life (EOL). However, care for these children is often impeded by obstacles. For example, in a survey of 375 nurses working in neonatal and pediatric intensive care units, the highest-rated perceived obstacles were language barriers and parental discomfort with withholding and/or withdrawing respiratory support. The highest-rated perceived supportive behavior was allowing the family time alone with the child after death.
Background/significance
This study builds on research documenting nurses’ perceptions of obstacles to and supportive behaviors for end-of-life care among critical care nurses, emergency department nurses, rural emergency department nurses, and oncology nurses, as well as behaviors facilitating family presence during pediatric resuscitation. The 2010 survey, however, reported only quantitative results. A need was therefore identified to report the rich qualitative data also gathered in that study from the neonatal/pediatric ICU nurses providing EOL care. The research question for this study was: What suggestions do neonatal and pediatric ICU nurses have to improve end-of-life care for children?
Method
Institutional review board approval was obtained for this mixed-methods study. In addition to the quantitative items, study participants responded to an open-ended question about how they would change EOL care in neonatal and pediatric ICUs. Of the 375 neonatal/pediatric nurses who returned completed usable questionnaires, 225 (60%) offered one or more suggestions. These responses were analyzed and coded by the research team, which included 2 qualitative nurse researchers, a doctorally prepared critical care nurse/advanced practice nurse, and a master’s-prepared pediatric nurse. Themes generated individually were compared with those of others on the research team and with the literature, with an interrater reliability of .95 between reviewers of the narrative data.
Results
Thirty-three percent of the sample were female, with a mean age of 41 years (range, 24–63 years). Mean experience in the neonatal/pediatric ICU was 14.5 years (range, 1–38 years). Eighty-seven percent of study participants were certified critical care nurses, and 82% provided direct patient care. The overall theme was “Supporting the child making the transition from life to death.” One nurse suggested that it is important to “embrace the concept of a good death, a facilitated or supported aspect of critical intensive care.” Another study participant said it is important for the child “to be with a supportive family in a peaceful, quiet, private environment, free of pain.” Another nurse emphasized the importance of providing individualized EOL care: “Every family deals with death so differently. We have to be flexible with each individual situation. No one family has the same expectations or deals with the stress in the same way.” Participants identified several facilitators of quality EOL care for neonates/children in neonatal/pediatric ICUs, often in the context of high acuity. Some participants felt their units facilitated the provision of quality EOL care, while others expressed frustration with their current work environment. However, facilitators to alleviate suffering were overwhelmingly identified. Themes included creating a peaceful passing through an appropriate environment; offering professional support and presence: making a difference; letting go when futility exists; increasing the quality of communication; and strengthening caregivers. One nurse expressed her vision beautifully: “Failure is often looked upon as death—however, I feel I fail families whom I cannot provide a careful, calm, loving environment when a child transitions from life to death.”
Conclusions
Nurses’ suggestions for improving neonatal/pediatric EOL care are closely tied to the principles of patient- and family-centered care: demonstrating respect and fostering dignity, sharing information with patients and families, sharing power (participation) in decision making, and family/professional collaboration. The importance of using rituals to help parents create meaningful moments as they experience the death of their child cannot be overemphasized. These qualitative findings can be used to develop clinical protocols to guide EOL care in neonatal/pediatric ICUs. Listening to the voices of the nurses who participated may foster discussions of improved care in specific neonatal/pediatric critical care units.
RS7 Barriers to Compliance with Ventilator Bundles in the Intensive Care Unit
Timothy Madeira, Andrea Jones; Georgetown University, Washington, DC
Purpose
To determine the perceived barriers to implementation of the Institute for Healthcare Improvement (IHI) ventilator bundle in an adult intensive care unit (ICU), as perceived by ICU nurses.
Background/significance
Ventilator-associated pneumonia (VAP) is a hospital-acquired infection (HAI) associated with increased morbidity and mortality. It occurs in up to 24% of ICU patients intubated longer than 48 hours and is regarded as the most fatal and most costly HAI nationwide. The “ventilator bundle” is a set of 5 interventions related to ventilator care that, when implemented together, achieve significantly better outcomes than when implemented individually.
Method
This nonexperimental, cross-sectional, descriptive study, approved by the institutional review board, used a convenience, purposive sample of 41 registered nurses in 3 adult ICUs in an urban hospital. Nurses with less than 6 months of ICU experience were excluded. A paper-and-pencil survey comprising 31 questions was distributed to nurses for self-report of their barriers to (1) maintaining head-of-bed elevation of at least 30°, (2) gastrointestinal prophylaxis, (3) deep venous thrombosis prophylaxis, (4) 0.12% chlorhexidine mouth care, and (5) daily sedation interruption (“sedation vacation”) with a ventilator weaning trial. Associations between demographic characteristics and selected survey questions were examined by using the Fisher exact test (significance level, P = .05).
Results
The sample (n = 41) represented a 22.78% participation rate. Seven percent of respondents were male and 93% female. Sixty-eight percent were ages 21 to 30, 24% ages 31 to 40, 5% ages 41 to 50, and 2% older than 50. Ninety percent had a bachelor’s degree, 7% a master’s degree, and 2% an associate’s degree. The majority (56%) of respondents had 1 to 5 years’ experience. Six barriers were reported as statistically significant reasons for decreased compliance with consistently performing the elements of the ventilator bundle: (1) knowledge deficit, (2) not enough time, (3) patient discomfort, (4) medical contraindication, (5) medications and/or equipment out of stock, and (6) lack of equipment at the bedside.
Conclusions
By identifying existing barriers, researchers can begin to explore ways to help bedside nurses overcome them and increase ventilator bundle compliance. By lowering the incidence of VAP nationwide, nurses can effect positive change in the ICU. Decreasing the number of ventilator days a patient faces may shorten the overall ICU length of stay, lower the associated costs of ICU care, and reduce the mortality attributed to VAP.
RS8 Caring for a Family Member with a Left Ventricular Assist Device as Destination Therapy: Changes in the Tasks of Caregiving After Implantation
Judith Hupcey, Lisa Kitko; The Pennsylvania State University, Hershey, PA
Purpose
To determine whether the tasks associated with caregiving (defined as the level of treatment burden for heart failure) for a family member with advanced heart failure changed after implantation of a left ventricular assist device as destination therapy (LVAD-DT). We also explored which caregiving tasks changed from before to after implantation for these family caregivers.
Background/significance
Interventions for heart failure are continually being developed to extend life. One such intervention, the LVAD, originally used as a bridge to transplantation, is now being implanted permanently as a life-prolonging end-of-life treatment for patients who are not transplant candidates. Although the number of LVAD-DT implants has increased exponentially, little research has been done, particularly on the role of family caregivers, who are vital to the care of these patients and the device.
Method
The sample was recruited from a large rural tertiary care medical center with an active LVAD program. All family caregivers who accompanied the patient to LVAD clinic in a 4-month period were asked to participate. A visual analog scale (VAS) was developed to measure the burden of treatment. The VAS was measured on a horizontal 100-mm scale anchored by the words: least possible burden (0) to most possible burden (100). The caregiver was asked to mark the level of treatment burden that they felt before and after LVAD-DT implant. The caregiver was also asked to describe the tasks associated with caring for a person with an LVAD and how these tasks changed from before the implant.
Results
Caregivers accompanied 22 of the 25 patients seen in clinic, and all participated. The average age of the caregivers was 63 years; of the patients, 64 years. Average time since the diagnosis of heart failure was 12 years, and since LVAD implantation, 11 months. Caregivers’ mean perceived treatment burden was 38 before and 33 after LVAD implantation. When the sample was split by time since implantation, the level of burden decreased with greater time since implant. Caregiving tasks changed after LVAD implantation, shifting from preventing exacerbations of heart failure to tasks associated with the device. All caregivers expressed the emotional toll of accepting this device as the last treatment option.
Conclusions
Family caregivers are an integral part of the team providing care for a patient with an LVAD-DT, yet their needs and concerns are rarely addressed by health care providers. Although there was a decrease in the perception of burden among caregivers from before to after implant, the drop was not dramatic and caregivers were still providing an extensive amount of care. Caregivers need to be supported with caregiving tasks and emotionally as they live through this end-of-life experience.
RS9 Changing ICU Practice: The Nurse/Physician Collaboration Project
Paula Lusardi, Virginia Brown, Elizabeth Henneman, William McGee, Karen Shea, Mary Talbot; Baystate Medical Center, Springfield, MA
Purpose
Medical errors in the intensive care unit (ICU) are worrisome because of the potential negative effects on patients’ outcomes. The positive impact of nurse-physician collaboration on patient care outcomes has long been recognized, and nurses’ perceptions of ICU collaboration have been linked with better patient outcomes and nurse retention. The primary aims of this research were 2-fold: (1) improve nurse-physician collaboration and (2) decrease medical errors and potential liability risk.
Background/significance
The Institute of Medicine reports that 44 000 to 98 000 preventable deaths occur annually as a result of medical errors, at a cost of $17 billion to $29 billion. The Joint Commission’s 1995–2004 report suggests that poor collaboration contributed to 60% to 80% of medication errors. We track errors and communication in our ICU, but medical errors and nurse-physician collaboration have been variable. Authors suggest that efforts to improve interpersonal communication between nurses and physicians are crucial to reducing error risk and improving patient safety.
Method
This quasi-experimental, nonequivalent control group design with qualitative and quantitative components included a convenience sample of ICU nurses and physicians. We reconfigured rounds, adjusted the timing of rounds, used daily goal sheets, and implemented staff education processes, all to increase collaboration between nurses and physicians. We compared data from before and after the intervention, measuring nurse-physician communication (Shea communication tool) and collaboration (Baggs collaboration tool [CSACD]), analyzing 2 nurse-physician focus groups and consultants’ observations, and observing rounding processes. We also tracked other outcomes: error reports (medical errors, self-extubations, procedural errors, care coordination errors), recidivism, and ethics reports.
Results
Nurse participation: nurses participated in rounds about 85% of the time and, when “in the circle” of discussion, collaborated frequently with physicians. Consultant observations: nurse-physician discussion during rounds was variable. Focus groups: rounds were described as “like a roller coaster,” dependent on attendees; collaboration led to ease of discussion. Collaboration (CSACD): collaboration improved across groups, but physicians rated their collaboration higher than nurses rated it. Communication (Shea tool): physician communication improved across the group, but several physicians had low scores, and nurses made many negative comments about physicians. Error (SRS) reports: a significant difference (χ2) in medication errors that reached the patient. Recidivism: unchanged. Ethics reports: unchanged.
Conclusions
Active collaboration in rounds depends on the individual nurse and physician. Rounds should be scripted and mandatory for nurses, and consistent attending participation in rounds is needed. Consistent with the literature, physicians think they collaborate and communicate better than nurses think they do. A stronger collaborative intervention is needed to create a healthy work environment and decrease errors; nurses and physicians need to be respectful, attentive, and open to collaboration.
RS10 Comparison of Different Methods for Achieving Hemostasis After Arterial Sheath Removal
Mary Toma McConnell; Rex Healthcare, Raleigh, NC
Purpose
To determine whether the use of procoagulant pads in combination with manual compression would decrease time to hemostasis compared with our institution’s procedure of manual compression alone after arterial sheath removal associated with percutaneous coronary intervention (PCI). Progressive care staff nurses wanted to replicate this study and advocate for their patients through shorter compression times and earlier ambulation.
Background/significance
Despite increasing use of procoagulant pads in conjunction with manual compression to achieve hemostasis after arterial sheath removal, few randomized controlled trials (RCTs) have evaluated the impact of such pads on time to hemostasis and outcomes. Our RCT’s outcome measure was the time required to achieve hemostasis. Shorter compression times could be clinically beneficial in terms of patients’ comfort and staff time.
Method
A convenience sample of 80 PCI patients was randomly assigned to 1 of 3 methods for achieving hemostasis at the femoral artery site after sheath removal: manual compression alone (n = 26), SyvekPatch NT plus manual compression (n = 26), or D-Stat Dry plus manual compression (n = 28). Randomization was accomplished via a computer-generated random number sequence. Outcome variables included time to hemostasis, number of pressure applications, and development of complications. Analysis of variance and χ2 analysis were used to test differences among the 3 groups, with P less than .05 considered significant.
Results
A total of 80 patients were enrolled and completed study participation from April 2008 to June 2010. Mean (SE) total time of manual pressure application to achieve hemostasis was 22.3 (1.2) minutes for manual compression alone, 17.8 (1.3) minutes for the SyvekPatch NT, and 17.5 (1.4) minutes for D-Stat Dry. Time to hemostasis differed significantly among the 3 methods (F2,77 = 4.77, P = .02); manual compression alone took significantly longer than the SyvekPatch NT (P = .008) and D-Stat Dry (P = .010) methods. Complications were rare and did not differ significantly among the 3 methods.
Conclusions
In this study, time to hemostasis with manual compression alone averaged almost 5 minutes longer than with either of the other 2 methods (SyvekPatch NT plus manual compression; D-Stat Dry plus manual compression), a statistically significant reduction for the procoagulant pad groups. No difference in time to hemostasis was found between the 2 procoagulant pad groups.
RS11 Comparison of the Use of a Compression Assist Device vs Manual Compression After Arterial Sheath Removal: Phase II
Beverly Dressel, Anne Digue, Rhonda Pugh; Barnes Jewish Hospital, St Louis, MO
Purpose
This phase II study compared the use of a compression assist device and manual compression following removal of femoral arterial sheaths in the cardiac catheterization laboratory (CCL) to determine if larger sheath sizes and longer compression times contributed to outcomes of clinician hand fatigue and confidence/satisfaction in achieving hemostasis. Incidence of patient complications was also studied.
Background/significance
Achieving optimal hemostasis following sheath removal is important to prevent vascular complications. The current practice is to apply digital pressure over the site or use the compression assist device. The phase I study of 4F sheaths showed no significant difference in clinician hand fatigue and confidence/satisfaction in achieving hemostasis between the 2 groups. A higher incidence of hematoma formation was found with manual compression (P < .001).
Method
Patients (n = 154) undergoing removal of 4F to 8F sheaths were randomized to the compression assist or the manual compression group. After sheath removal, staff completed a Likert survey measuring hand fatigue and confidence/satisfaction in achieving hemostasis, and patients rated their pain on a scale from 1 to 10. A posttest-only comparison design with 4 groups based on technique and sheath size was used to compare the effects of the device and the sheath size: manual/4F sheath (n = 62), manual/>4F sheath (n = 24), assist device/4F sheath (n = 46), assist device/>4F sheath (n = 22). χ2 analysis was used to determine differences in complications among the 4 groups, and Kruskal-Wallis analysis was used to compare fatigue, confidence, and patient pain rating scores.
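A Kruskal-Wallis comparison of this kind can be sketched as follows; the pain ratings below are fabricated for illustration (only the 4 group sizes follow the abstract), and the scipy call is an assumption about tooling, not the study's actual analysis.

```python
# Illustrative Kruskal-Wallis test comparing pain ratings across the
# 4 device/sheath-size groups. Data are hypothetical; group sizes
# (62/24/46/22) mirror the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {
    "manual/4F": rng.integers(1, 8, 62),
    "manual/>4F": rng.integers(2, 10, 24),
    "assist/4F": rng.integers(1, 7, 46),
    "assist/>4F": rng.integers(1, 7, 22),
}
h_stat, p_value = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_value:.3f}")
```

The Kruskal-Wallis test is the rank-based analogue of one-way ANOVA, appropriate here because Likert-type fatigue and pain scores are ordinal rather than normally distributed.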
Results
No statistically significant differences in clinician hand fatigue or confidence/satisfaction in achieving hemostasis were found among the 4 groups, except for shakiness (P = .03), with the highest mean score in the compression assist group with 4F sheaths (mean = 4.22, SD = 3.43), and patient pain rating (P = .02), with the highest mean score in the manual/>4F sheath group (mean = 3.87, SD = 2.67). A statistically significant difference (P = .02) was found for complications, with more oozing in the 2 groups using the assist device (n = 7) than in the 2 groups using manual compression alone (n = 3). One hematoma occurred in the manual compression groups and none in the compression assist groups.
Conclusions
The majority of the sheaths removed with each technique were 4F, which could have biased the results; we therefore compared all sheaths >4F with 4F sheaths. In addition, staff preference for manual compression plays a key role in sheath removal technique. Infrequent use of the compression device could account for the increased reports of shakiness and the incidence of vascular complications. Patient pain may be higher with manual pressure because of the smaller surface area in contact with the artery.
RS12 Depression, Anxiety, and Stress Among Nursing Students and the Relationship to Grade Point Average
Julie Floyd; The University of Tennessee at Martin, Martin, TN
Purpose
To improve understanding of the effect of depression, anxiety, and stress on semester grade point average (GPA) of nursing students. Understanding how psychological variables may affect GPA may assist universities in identifying students at risk of failure. Additionally, this study may allow educators to become more proficient at identifying students with psychological disturbances. Emotional stability is an important part of the potential success of college students and nurses.
Background/significance
The mental well-being of university students is a topic of increasing concern throughout the world. Limited research has been conducted on levels of depression, anxiety, and stress among university student nurses and the impact of these emotional states on GPA. If the signs of negative emotions are identified early on, students may be able to receive services to facilitate more positive coping skills, which would ultimately benefit the nursing profession.
Method
This institutional review board-approved research is based on a 2010 study of university undergraduate nursing students that used a descriptive, correlational design. A demographic survey and the Depression Anxiety Stress Scale were administered; the instruments took approximately 15 minutes to complete. Second-semester final grades were obtained by a research assistant. Descriptive statistics were used to determine the prevalence of the emotional states in the baccalaureate nursing population. A Kruskal-Wallis test was used to determine differences in these emotions across the 3 levels of nursing students. Multiple regression analysis was used to determine whether these emotions were significant predictors of GPA (4.0 scale) among nursing students.
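The regression step described above can be sketched with ordinary least squares; everything below is fabricated for illustration (the sample size of 117 is inferred from the reported F3,113, and the coefficients are assumptions, not the study's data).

```python
# Illustrative sketch: predicting GPA from depression, anxiety, and stress
# scores via ordinary least squares, with R^2. Data are fabricated; n = 117
# is chosen so the model has 3 and 113 degrees of freedom, as reported.
import numpy as np

rng = np.random.default_rng(2)
n = 117
X = rng.uniform(0, 42, (n, 3))  # hypothetical DASS-style subscale scores

# Hypothetical linear relationship plus noise (purely for demonstration).
gpa = 3.5 - 0.01 * X @ np.array([0.8, 0.2, 1.0]) + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, gpa, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((gpa - pred) ** 2) / np.sum((gpa - np.mean(gpa)) ** 2)
print(f"R^2 = {r2:.2f}")
```

R2 here has the same interpretation as in the abstract's results: the proportion of variance in semester GPA explained by the 3 emotional-state predictors together.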
Results
A significant difference in anxiety among the levels of nursing students was found (H[2] = 7.87, P < .05), indicating that the groups differed from each other, with level I students reporting the highest anxiety. The combination of variables was statistically significant in predicting end-of-semester GPA (F3,113 = 3.50, P < .05), with stress and depression contributing significantly to the prediction (R2 = 0.19). A second combination was also statistically significant in predicting GPA (F3,113 = 6.174, P < .05), with stress among level III students (P = .04) and depression among level III students (P = .05) contributing significantly to the prediction (R2 = 0.34).
Conclusions
These findings indicate that nursing students have a significant amount of anxiety during the first year of nursing. Level I nursing students have the lowest GPA, which indicates that a higher level of anxiety may potentially lead to poor academic achievement. Nursing faculty may be able to implement a method of improving coping and study skills while encouraging use of available resources to reduce anxiety and stress. Support for students is essential to increase graduation rates and reduce the nursing shortage.
RS13 Difference in Sedative and Analgesic Medication Use Associated With Augmented Sedation Assessment
DaiWai Olson, Meg Zomorodi, Christopher Cox, Michael James, Joseph Govert, Eugene Moretti, Carmelo Graffagnino; Duke University, Durham, NC
Purpose
Standard sedation practice for patients receiving mechanical ventilation is directed by observational scales such as the Richmond Agitation-Sedation Scale (RASS). Physiological measurement of consciousness using bispectral index (BIS) monitoring has the potential to add to patient care. We performed a randomized controlled trial comparing RASS-targeted sedation with a combined sedation assessment protocol using both RASS and BIS monitoring.
Background/significance
Sedation is an important and complicated aspect of critical care. Oversedation increases mechanical ventilation duration and resource utilization. Undersedation is associated with agitation, device removal, and injury to self or to staff. Our previous pilot study of augmenting RASS-targeted sedation with BIS demonstrated a reduction in sedative use without an increase in undersedation events.
Method
This institutional review board-approved study enrolled 300 critically ill patients intubated for less than 24 hours from 3 ICUs at 2 hospitals. Subjects were randomized to a nursing-directed sedation protocol guided by RASS only (standard care) or sedation guided by RASS and BIS (intervention group). Sedatives and analgesics were initially dosed to target a RASS of −2 and a BIS of 60 to 70. Health care providers could individualize the sedation goal for a subject if the subject's medical condition changed. Sedative use was abstracted by a research coordinator for the entire ICU length of stay. Univariate and descriptive data were analyzed with SAS 9.4 (SAS Institute, Cary, North Carolina).
Results
Subjects (51% male; mean age, 55.8 years) were enrolled from 3 ICUs: mixed-bed (n = 74), trauma (n = 72), and medical-surgical (n = 154). The mean duration of mechanical ventilation was 7.1 days, during which all sedative and analgesic medications were recorded and mean hourly doses calculated. About 90% of subjects received fentanyl at least once, and 77% received propofol. Other medications administered included midazolam (68%), lorazepam (45%), morphine (21%), hydromorphone (20%), and dexmedetomidine (17%). Standard care subjects received lower mean hourly doses of benzodiazepines, dexmedetomidine, and morphine (P < .001). Intervention subjects received less fentanyl, propofol, and hydromorphone (P < .001).
Conclusions
We found that BIS-augmented sedation assessment reduced the use of fentanyl, propofol, and hydromorphone, although it was associated with greater use of benzodiazepines, dexmedetomidine, and morphine. Future research should explore patterns of medication use between these 2 groups.
RS14 Donate Life Black Hills: Public Education Efforts Using Hospital Nurses as Trusted Messengers to Increase Organ Donation
Wendy Asher, Shaye Krcil; Rapid City Regional Hospital, Rapid City, SD
Purpose
To determine if implementation of a comprehensive media and community outreach education campaign using messages delivered by local nurses would increase community support for organ and tissue donation as measured by (1) surveys before and after the intervention and (2) donor designation rates. The study was supported by a grant from the Health Resources and Services Administration and was a collaboration between Rapid City Regional Hospital, LifeSource Organ Procurement Organization, and the University of Minnesota.
Background/significance
The donor designation rate in the South Dakota Black Hills was 40.8%, less than the 51% regional average. This area receives limited public education about organ donation because of its geographic location. Morgan’s Theory of Reasoned Action proposes that an individual’s attitude toward organ donation is a consequence of their knowledge and their personal values. Thus, lack of knowledge about organ donation could affect community members’ willingness to register as organ and tissue donors.
Method
The 4-county intervention community was paired with a matched comparison community where no intervention occurred. A 6-month multimedia campaign (television, print, billboard, radio) was implemented by critical care nurses who delivered the message: (1) organ donation saves lives, (2) organ donation is a way you can help others, and (3) nurses take good care of organ donors and their families. Nurses conducted education and outreach activities at 8 community events and 6 workplaces (>5000 employees). Surveys were mailed to randomly selected households in both communities before and after the intervention. A matched pretest-posttest design compared survey results as well as donor designation rates.
Results
About 89% of survey respondents in the intervention community saw the media ads featuring local nurses. Respondents had an increase in organ donation knowledge, as measured by the change coefficient, and a statistically significant increase in multivariate-adjusted self-reports of (1) family discussions about organ and tissue donation, (2) likelihood of donating organs and tissues after death, and (3) donor designation. A dose-response relationship occurred: the greater the media exposure, the larger the increase. There was no change in the donor designation rates reported by state drivers' license bureaus in either the intervention or the comparison community.
Conclusions
The intervention raised community awareness and increased propensity toward organ donation. The high degree of media recognition may be because the "faces" of the campaign were local nurses who were known and trusted by community members. The lack of increase in drivers' license donor designation rates might be attributed to the fact that only 20% of drivers renew their licenses annually (5-year renewal period), so a large number of renewals is necessary before significant changes become apparent.
RS15 Effect of a Nurse-Implemented Early Progressive Activity Program on Duration of Intubation
Elise Cundiff; Riverside Methodist Hospital, Columbus, OH
Purpose
To evaluate the feasibility and effect of a nurse-driven activity program for intubated patients and patients receiving mechanical ventilation in the intensive care unit (ICU); the hypothesis was that those patients who received early and progressive mobilization would have reduced duration of intubation, ICU length of stay, and hospital length of stay compared with patients receiving standard care.
Background/significance
The pathophysiology associated with bed rest is well researched in hospitalized patients. The potential complications of intubation and mechanical ventilation are also well documented. Several standards of care to decrease time of intubation and risk for associated complications have been adopted; however, the need to reduce ventilator days and improve outcomes continues. The literature strongly suggested that early mobilization might be an effective addition to the care of ventilator patients.
Method
Subject recruitment focused on patients with uncomplicated respiratory failure. Patients who consented were randomized to receive either standard care or progressive activity. Patients in the intervention group had activity twice each day, based on the nurse-driven activity protocol, ranging from sitting in the chair position in bed to standing at the bedside. Data were collected on patients' characteristics: age, sex, body mass index, score on the Acute Physiology and Chronic Health Evaluation (APACHE) II, sedation, and prior ability at activities of daily living. The data were subjected to standard statistical analysis; to control for potential confounders, multiple linear regression was used to compare the intervention and control groups with respect to the target outcomes.
Results
The control and intervention groups were similar in age, APACHE II score, body mass index, and sex; trends toward both higher levels of sedation and lower prior ability at activities of daily living in the control group were not statistically significant. Nine subjects were deemed nonevaluable because of either withdrawal of life support or elective insertion of a tracheostomy tube. Patients in the intervention group had a mean hospital stay 1.1 days shorter, a mean of 18.3 fewer hours intubated, and a mean of 22.6 fewer hours in the ICU compared with patients in the control group. However, these differences were not statistically significant at P < .05.
Conclusions
Although the statistical analysis may have been affected by the small sample size and large standard deviations observed within both groups, the results are promising and consistent with previously published reports. A nurse-implemented activity program may safely decrease the duration of intubation, ICU length of stay, and hospital length of stay for patients with respiratory failure who require mechanical ventilation.
RS16 Effects of Early Initiation of Induced Therapeutic Hypothermia
Francisco Castelblanco; Mission Hospital, Asheville, NC
Purpose
To compare the effects of initiation of therapeutic hypothermia by emergency medical services (EMS) personnel, emergency department nurses, and intensive care unit (ICU) nurses on patients' outcomes, as measured by the Glasgow Outcome Scale, in cardiac arrest patients.
Background/significance
Out-of-hospital cardiac arrests are responsible for about 325 000 deaths annually in the United States, with a national survival rate of only 8%. The Bernard and Hypothermia After Cardiac Arrest studies demonstrated that use of therapeutic hypothermia after cardiac arrest yields survival rates as high as 49% and 55%, respectively. Therapeutic hypothermia can be started in many different settings.
Method
This nonrandomized retrospective observational study measured various data elements, patient mortality, and neurological status at discharge. The subjects were a convenience sample from a therapeutic hypothermia database of patients admitted in calendar years 2008 through 2010. The Glasgow Outcome Scale was used to measure the subjects' survival rate. The mortality rate was measured by searching for the terms "expired, solace, or hospice" under the discharge disposition in the database.
Results
This study used a convenience sample of 178 consecutive cardiac arrest patients admitted from 2008 to 2010 to an 800-bed hospital in Western North Carolina. Out of the overall sample, 57 patients had a favorable neurological outcome (32% overall survival rate). EMS initiated therapeutic hypothermia in 24 instances, with 7 patients surviving to discharge (29% survival rate). The emergency department initiated therapeutic hypothermia in 17 instances with 8 patients surviving to discharge (47% survival rate). ICU nurses initiated therapeutic hypothermia in 137 instances with 42 patients surviving to discharge (31% survival rate).
Conclusions
Study results reproduced previous findings demonstrating the efficacy of therapeutic hypothermia in the treatment of cardiac arrests. The results also indicate a possible advantage to starting therapeutic hypothermia in emergency departments.
RS17 Evaluation of a Program to Ease the Transition of Trauma Patients/Families from a Critical Care to a Surgical Unit
Melanie Berube, Francis Bernard, Annik Gagne, Celine Gelinas, Andrea Laizner; Hôpital du Sacré-Coeur and McGill University Health Centre, Montreal, Quebec
Purpose
This pilot study, funded by a grant from the Quebec Interuniversity Nursing Intervention Research Group, aimed to evaluate the feasibility and acceptability of a nursing intervention program to optimize the transition of trauma patients and their families from a critical care to a surgical unit and to prevent adverse events.
Background/significance
The complexity of trauma patients' clinical conditions places them at risk for adverse events after the transition from critical care to a general care area. Clinicians have observed manifestations of stress among patients and their families, as well as patients' complications that might have been prevented by improving processes related to transition. These issues have also been reported in many studies. A nursing intervention program was therefore developed to address difficulties inherent in critical care transition.
Method
The program comprised 3 main categories of interventions, and a grid indicating the interventions to be undertaken was created to guide nurses. Samples of 16 patients and families, as well as 12 clinicians (nurses and physicians), were recruited in 2 level I trauma centers. Trauma patients/families who spent more than 7 days in critical care and clinicians with more than 2 years of experience with the trauma population were invited to participate. Individual interviews were conducted with patients and families to determine whether the program facilitated their transition, and focus groups were carried out with clinicians to establish whether the program was applicable in practice.
Results
Preliminary results showed that most patients (85%) and families (79%) found the program helpful. The most appreciated interventions were the information provided about the difference in the level of care and being introduced to the receiving team. Continuity of care was considered the main aspect to improve. Clinicians reported that patients/families seemed to view the transition more positively and that the level of care was decreased in a more timely manner. However, to make the program more applicable, the intervention grid should be simplified, dedicated resources for the introduction to the receiving team identified, and a better communication method established to optimize continuity of care.
Conclusions
Findings from this pilot study provide information to refine a nursing intervention program designed to meet trauma patients' and families' needs during critical care transition while remaining acceptable and feasible. An experimental design is planned to test the effectiveness of this program in trauma patients and their families. Such a program could potentially help decrease anxiety, improve satisfaction with care, and ultimately prevent clinical deterioration of vulnerable patients.
RS18 Evaluation of an Insulin Transition Protocol in an Intensive Care Unit: A Before and After Study
Danielle Fraser, Leigh Anne Jacobson, Kathleen Jerguson, Leeanna Spiva; WellStar Kennestone Hospital, Marietta, GA
Purpose
To determine the safety and efficacy of a new standardized transition order set for converting patients from a continuous insulin infusion to a subcutaneous insulin regimen. The primary objective was to evaluate the effectiveness of the new protocol in maintaining target blood glucose levels (70–180 mg/dL) after discontinuation of the insulin infusion. Frequency of hypoglycemic and hyperglycemic events and the amount of correction insulin administered were also assessed.
Background/significance
Insulin infusions are preferred for management of hyperglycemia in the intensive care unit (ICU), and several protocols have been described in the literature. However, evidence is still limited on how to transition patients safely to a subcutaneous regimen. At the study site, transition regimens were inconsistent and at times unsuccessful, resulting in poor control of blood glucose levels. Therefore, a transition order set was developed to standardize the process for conversion from intravenous to subcutaneous insulin.
Method
A retrospective study was conducted by using the hospital’s online database of ICU patients treated with a continuous insulin infusion. Patients requiring an insulin infusion before development of the transition order set served as the “prior to protocol” group. After implementation of the order set, patients requiring an insulin infusion were included and were further separated into 2 groups: patients transitioned as the protocol recommended (per protocol) and those transitioned differently than the protocol recommended (off protocol). An investigator-developed data collection form was used. Data were analyzed with descriptive and inferential statistics by using SPSS v18.
Results
The per-protocol group differed significantly from the prior-to-protocol and off-protocol groups. Mean (SD) blood glucose levels were lower on days 2 and 3 for the per-protocol group (day 2: 183.02 [54.89] mg/dL; day 3: 152.02 [46.39] mg/dL) than for the prior-to-protocol group (day 2: 189.16 [53.56] mg/dL; day 3: 183.86 [55.83] mg/dL) and the off-protocol group (day 2: 227.75 [61.22] mg/dL; day 3: 213.51 [75.14] mg/dL). The mean (SD) total amount of correction insulin was also lower for the per-protocol group (10.2 [8.96] units) than for the prior-to-protocol group (13.0 [10.77] units) and the off-protocol group (16.8 [8.92] units). There was no significant difference in the rate of hypoglycemic events between groups.
Conclusions
The results of this study provide evidence of the safety and effectiveness of a transition order set. Patients transitioned per protocol had significantly better glucose control as demonstrated by fewer hyperglycemic events, lower mean blood glucose levels at 48 and 72 hours, and lower amounts of correction insulin used. Implementation of a standardized protocol improved glycemic control without an increase in hypoglycemic events in ICU patients transitioning from intravenous to subcutaneous insulin.
RS19 Evaluation of Applications of Pneumatic Compression Devices for Mechanical Thromboprophylaxis
Kathryn Killeen; Rush University Medical Center, Chicago, IL
Purpose
The primary aim was to assess current practices regarding prophylaxis of deep venous thrombosis (DVT) for patients in adult intensive care units (ICUs). A secondary aim was to evaluate potential differences in compliance with mechanical prophylaxis (sequential compression devices) associated with inpatient units and time of day.
Background/significance
Venous thromboemboli (VTE) are a significant source of morbidity and mortality in critically ill patients. Mechanical devices are used to prevent VTE, although their efficacy and role are less clear; the few available studies have conflicting results. VTE prevention is recognized as an area of quality improvement and patient safety by public agencies and private organizations. Evidence on the safety, efficacy, and implementation of these devices is lacking.
Method
This prospective, observational study used a convenience sample of patients undergoing mechanical ventilation during a 1-month period in 4 adult critical care units of an urban academic hospital. Patients were studied for the duration of mechanical ventilation, until death, or until the data collection period ended. Data collected on each patient included age, sex, unit location, and orders for VTE prophylaxis. Twice daily, data collectors observed the identified ventilated patients, using a standardized checklist to evaluate the application and operation of pneumatic compression devices (PCDs) and adherence to PCD orders.
Results
Nine hundred sixty-six observations were made in 108 patients, 47 (44%) of whom received thromboprophylaxis with PCDs alone and 61 (56%) of whom received PCDs in combination with an anticoagulant. Errors in application were found in 477 (49%) of the 966 observations. Patients received no PCD prophylaxis in 15% of total observations, and in 88 of 342 observations in which PCDs were the only thromboprophylaxis ordered. Half (51%) of misapplications were related to improper placement of sleeves on the legs. Misapplications did not differ in type or frequency between shifts.
Conclusions
PCDs are routinely used for prevention of VTEs in critical care. This method of prophylaxis is often not applied as ordered and intended in patients at increased risk for VTE. This information will further quality improvement endeavors by focusing efforts in areas most in need of attention for both clinicians and device manufacturers. There is an opportunity for nurses to provide leadership in extending knowledge of key factors that determine optimal application of PCDs to reduce risk of VTE.
RS20 Experience of Patients and Their Families in the Intensive Care Unit When the Patient is Physically Restrained for Intubation
Ruth Weyant, Melanie Roberts; Medical Center of the Rockies, Loveland, CO
Purpose
This phenomenological qualitative study assessed the experience of the patient and family in the intensive care unit when the patient is physically restrained for intubation/mechanical ventilation. The purpose was to understand the experience from the patients' and families' perspectives.
Background/significance
A large portion of the literature on the use of physical restraints is from general hospital units and residential homes but not from the intensive care unit (ICU). Staff members need to know if patients remember being restrained while in the ICU. Having a better understanding of the patient’s experience and the perception of the families will allow the nurse to intervene with appropriate nursing interventions at this vulnerable time.
Method
This institutional review board-approved study used a convenience sample from a cardiovascular intensive care unit (CICU). The study population (n = 14) was divided into 2 groups: planned and unplanned intubation/mechanical ventilation. The planned group received education before intubation; the unplanned group received no education and was intubated emergently. The sample size was determined by saturation. After written consent was obtained, interviews were conducted with patients and families by using a semistructured interview tool consisting of 5 questions. Audiotaped interviews were obtained after the patients were transferred out of the ICU and before discharge. Recordings were transcribed and analyzed for common themes.
Results
The results of this study are preliminary; final results will be available for the poster presentation. Several themes were identified: anxiety is decreased when nurses communicate the reason for the restraints/therapy; patients described the experience of intubation/mechanical ventilation as intense, with little or no memory of the physical restraint; families were reassured knowing that the restraints prevented the patient from pulling on tubes; and nursing caring behaviors were articulated by patients and families going through this experience. The most important finding is the magnitude of nurses' communication with both the patient and the family around this experience.
Conclusions
Understanding the patient/family experience in this study can change how nurses view physical restraint. Patients clearly identified issues with pain related to intubation/mechanical ventilation; pain management, rather than physical restraint, becomes the issue. When pain is controlled, will the patient even need physical restraint? Nursing caring behaviors have a profound effect on the patient/family, and awareness of the impact of these behaviors is essential to changing nursing practice.
RS21 Family Presence in the Cardiac Intensive Care Unit During Invasive Procedures and Resuscitation
Erica Edwards; Massachusetts General Hospital, Boston, MA
Purpose
To measure changes in perceptions of health care providers (HCPs) in the cardiac intensive care unit (CICU) before and after an intervention for family presence (FP) during resuscitation and invasive procedures.
Background/significance
In response to a growing demand from consumers, there has been a movement for FP during resuscitation and invasive procedures. Previous research on FP has been done with emergency department and pediatric personnel, but little research has involved ICU HCPs.
Method
The intervention consisted of presentations of the evidence supporting FP, informational e-mails to HCPs, posting of pertinent articles, posters, group discussions, family feedback, and development of a unit-based guideline. A survey was sent to CICU multidisciplinary HCPs before and after the intervention. The surveys used were the Family Presence Risk-Benefit (RB) Scale and the Family Presence Self-Confidence (SC) Scale for resuscitation and invasive procedures. There were 83 HCP respondents, 43 before and 40 after the intervention: the mean age was 37.2 years; 9 were men and 71 were women; most were nurses (79% before and 92% after), and 10 other types of HCPs also participated.
Results
There was no difference in age, sex, education, or years of experience between the HCPs before and after the intervention. Perception of the benefits of FP during resuscitation improved significantly after the intervention (from 3.01 to 3.17; t = 2.6, P < .01). The RB scale for invasive procedures and the SC scales for resuscitation and invasive procedures showed trends toward improvement in perceived benefits and self-confidence. There was a significant increase in the number of family members invited to be present during resuscitation (P < .02), although the number of family members who were present during resuscitation did not change. Nurses felt that they were the best persons to decide about FP during resuscitation and invasive procedures.
Conclusions
Perception of the benefits of FP during resuscitation improved after the educational program. This study revealed that nurses were more receptive to family presence at the bedside after education and guidelines were in place. The outcomes imply that it is imperative to educate HCPs in order to ensure that family presence is made available.
RS22 Human Factors Analysis of Code Cart Design and Utility
Tiat Kanthathin, Richard Fidler; San Francisco State University, San Francisco, CA
Purpose
In cardiac arrest, the speed of delivering emergency care has a direct impact on survival and outcomes. The purpose of this observational, descriptive, interprofessional project is to explore the human factors associated with finding critical items in a standard hospital code cart. The findings of this project are intended to serve as a basis for cart redesign, cart content adjustment, and to elicit staff input into making changes that shorten the retrieval time of emergency supplies.
Background/significance
In-hospital cardiac arrest continues to be a major public health problem, with more than 200 000 arrests occurring per year, and outcomes from in-hospital cardiac arrest have not improved significantly in decades. Human factors associated with finding items quickly in a code cart are poorly studied. Direct observation and post hoc video analysis were used to analyze clinicians' behaviors, movements, and use of visual aids while they attempted to find critical items quickly.
Method
A code cart was taken from hospital service to conduct this project. Using a dual-observer technique with multi-angle audio-video recording for post-hoc review, volunteer clinician participants were timed and observed finding 12 items. A data collection tool was collaboratively developed by nursing, medicine, and pharmacy. In addition to time analysis, human factors behavior analysis was performed by watching hand/eye movements, recording where participants first went to find an item, and noting whether drawer labels were used. A survey of attitudes and perceptions was conducted after the performance, followed by a second timed performance finding the same items in scrambled order.
Results
Subjects spent on average 234 seconds (range, 97–419; SD, 86.9) finding the 12 items. Immediate post-survey performance finding the same items in scrambled order showed significantly decreased time to a mean of 32 seconds for all items (range, 23–53; SD, 7.2; P < .05) suggesting that exposure to the contents improved immediate recall. When finding emergency drugs, 53% opened the top drawer instead of the correct second drawer, suggesting that drugs are expected in the top drawer. Although 75% of subjects used drawer labels to help find items, 62% reported the labels as unhelpful. A major problem finding central catheter and intubation equipment was directly related to overcrowded drawer content covering supplies.
Conclusions
The current design of this code cart is functional but can be improved. Further research should focus on (1) the design of the cart itself, (2) intuitive placement and streamlining of contents, and (3) defining minimum staff training for proficient use. The nonuniformity of code carts among different hospitals was cited as adding unfamiliarity with contents, making it reasonable to recommend standardized, evidence-based code carts that incorporate modern resuscitation science into the design.
RS23 Influence of Nurse Characteristics and Knowledge on the Likelihood to Request Physical Restraints in the Intensive Care Unit
Clinton Leonard, Richard Benoit, Elizabeth Chandler; Vanderbilt University Medical Center, Nashville, TN
Purpose
To determine critical care nurses’ knowledge of restraint regulations and effectiveness and to examine the influence that nurses’ personal characteristics and perception of potential patient harm have on their likelihood of requesting physical restraints or sedation.
Background/significance
Use of physical restraints is a nurse-sensitive indicator monitored by federal and accrediting agencies, namely, Medicare, the Joint Commission, and Magnet. Studies have shown (1) intensive care units (ICUs) have disproportionately high use of physical restraints, (2) use of physical restraints in ICUs varies widely across hospitals even when units are matched by type of ICU, and (3) use of physical restraints has been correlated with sedation in non-ICU settings. Little is known regarding critical care nurses’ decisions to immobilize patients physically or chemically.
Method
A factorial survey design approved by the institutional review board at 3 hospitals involved 300 nurses; only ICU data (n = 94) are presented. Survey instruments assessed hospital site, demographics, restraint knowledge, and perception of harm for each of 5 unique case vignettes with situational (eg, time of day) and clinical (eg, dehydration) variables resulting in falls or delirium. Outcome variables were likelihood (0 = not at all, 9 = absolutely) of requesting physical restraints or sedation for each vignette. An overall mean likelihood for use of physical restraints and sedation was derived for each nurse. Data were entered and analyzed in SPSS. Univariate and bivariate analyses were conducted for likelihood of use of physical restraints and sedation.
Results
Nurses’ mean age was 32.7 years, 87% were women, and 31% had an associate’s degree. Mean ratings for use of physical restraints ranged from 2.8 to 9.0, with an overall mean of 4.84 (SD, 9.0). Bivariate analyses showed that hospital C (mean score for physical restraint use, 5.47 vs 3.69 at hospital A and 3.96 at hospital B; F = 9.70, P < .001), younger age (r = 0.28), less knowledge (r = 0.38), and perceived harm (r = 0.66) (all P < .008) were associated with use of physical restraints. Mean sedation ratings ranged from 0 to 8.8, with an overall mean of 5.04 (SD, 2.03). Bivariate analyses showed that perceived harm (r = 0.48, P < .001), less knowledge (r = 0.27, P = .01), and likelihood to use physical restraints (r = 0.39, P < .001) were associated with likelihood to sedate.
Conclusions
Nurses’ variation in use of physical restraints was influenced more by their knowledge and perception of patient harm than by personal characteristics, such as age or sex. Hospital culture also appears to play a role. Little variation, however, was found in use of sedation, perhaps because of the widespread use of sedation and analgesia protocols in ICUs. Until such evidence-based protocols are determined for use of physical restraints in critical care, variation in this practice will likely continue.
RS24 Is Severity of Delirium in the Intensive Care Unit Associated With Cognitive Impairment After Intensive Care?
Hideaki Sakuramoto, Taro Mizutani, Takeshi Unoki, Ryuichi Yotsumoto; Tsukuba University Hospital, Tsukuba, Ibaraki, Japan
Purpose
To examine the hypothesis that severity of delirium in the intensive care unit (ICU) is associated with cognitive impairment at hospital discharge.
Background/significance
The evidence suggests that development of delirium is associated with cognitive impairment after critical illness, although a relationship between severity of delirium and cognitive impairment is not well known.
Method
Prospective cohort study enrolling 79 consecutive adults without dementia admitted to the adult medical and surgical ICUs of a tertiary-care teaching hospital between July and December 2009. Severity of delirium was represented by the score on the Intensive Care Delirium Screening Checklist (ICDSC); we assumed that a higher ICDSC score indicated more severe delirium. Patients were evaluated with the ICDSC every 8 hours during their ICU stay. After discharge from the ICU, patients were followed up for cognitive impairment with the Mini-Mental State Examination (MMSE) every 7 days until hospital discharge.
Results
Sixty-three percent of patients had delirium develop during the ICU stay. About 19% of patients had cognitive impairment at hospital discharge. Patients with delirium had more incidents of cognitive impairment after ICU discharge (P = .03). After adjusting for covariates, mean ICDSC score during the ICU stay and ICDSC score at ICU discharge were associated with cognitive impairment at hospital discharge (mean ICDSC score: adjusted odds ratio [OR], 1.6; 95% confidence interval [CI], 1.021–2.546; P = .04; ICDSC score at ICU discharge: adjusted OR, 1.6; 95% CI, 1.077–2.402; P = .02). Maximum ICDSC score during the ICU stay was not associated with cognitive impairment at hospital discharge.
Conclusions
Our findings indicate that severity and duration of delirium during the ICU stay may be associated with cognitive impairment at hospital discharge in ICU survivors.
RS25 Is Your Patient Trendy? The Trends that Occur Before Pulseless Electrical Activity Cardiac Arrest
Sara Litecky, Tina Spencer; Harborview, Seattle, WA
Purpose
To determine if there are consistent clinical antecedents of hospitalized pulseless electrical activity (PEA) cardiac arrest. Additionally, relationships of sex, race, and age to PEA survival were explored.
Background/significance
In a 4-year retrospective study, using the internal database of the National Registry of CardioPulmonary Resuscitation, 97 patients who experienced in-hospital PEA cardiac arrest were identified. Data were collected on multiple variables at 2-hour intervals during the 24 hours before PEA.
Method
Significant trends were evaluated with STATA using a linear test for trend with significance at P < .05. Demographic characteristics were cross-tabulated with admitting diagnosis, cause of PEA arrest, and survival or death. Outcomes were analyzed in SPSS by using χ2 test and phi coefficients. Thirty-eight variables were analyzed for a statistically significant trend for 24 hours.
Results
Five laboratory values and 9 vital signs showed significant trends: phosphate (P = .003), international normalized ratio (P = .01), fibrinogen (P = .049), pH (P = .001), bicarbonate (P = .02), heart rate (P = .03), arterial catheter/noninvasive systolic blood pressure (P < .001), arterial catheter/noninvasive mean arterial pressure (P < .001), and nonventilated patients’ respiratory rate (P = .02), central venous pressure (P = .001), fraction of inspired oxygen (FIO2) for the nonventilated patients (P = .02), and SpO2 for the nonventilated patients (P < .001). Of the 97 subjects, 59% were male, 64% were white, and 72% were more than 50 years old. Sixteen and a half percent of the subjects survived to discharge. Clinical antecedents to PEA arrest that have not previously been reported were identified.
Conclusions
The most frequent primary admitting diagnoses were medical in origin. There were no significant relationships between survival and age, sex, or race. The findings of the study have been used in trend alerts for rapid responses; delta trends for heart rate, blood pressure, respiratory rate, and FIO2 have become trigger alerts for sepsis. There has been a decline in PEA cardiac arrests after initiation of the rapid response team, early goal-directed therapy for sepsis, and palliative care services.
RS26 Learned Helplessness and Depressive Symptoms in Patients After Acute Myocardial Infarction
Benjamin Smallheer; Vanderbilt University, Nashville, TN
Purpose
Acute myocardial infarction (AMI) is associated with physical and psychosocial distress that can adversely affect health outcomes and disease progression. Mortality in individuals after AMI is adversely influenced by the degree of psychological distress the individual experiences. The purpose of this research study was to investigate the associations among learned helplessness, depressive symptoms, and targeted demographic, clinical, and psychosocial factors following an AMI.
Background/significance
A number of psychosocial factors are known to have an impact on the incidence and severity of depressive symptoms across clinical populations. Little is known, however, about the nature of the relationship between learned helplessness and depressive symptoms after an AMI. What was once believed to be a learned response in animals has been shown to be relevant to health outcomes in humans. Learned helplessness has the potential of affecting recovery from medical conditions, including AMI.
Method
Using a descriptive cross-sectional design, a convenience sample (N = 75) was recruited from 2 comprehensive heart institutes located in the southeastern United States. Subjects were individuals who had an AMI diagnosed within the past 12 months. Standardized instruments and measures were used to evaluate learned helplessness, depressive symptoms, self-efficacy, and social support. Demographic and clinical data also were collected for analysis. Descriptive statistics were calculated and bivariate analysis was conducted for the study and clinical variables. Hierarchical multiple linear regression analysis was also conducted to explore the potential unique and combined effect of study variables.
Results
A statistically significant, direct relationship was found between learned helplessness and depressive symptoms. No statistically significant associations were observed among the number of AMI events, the number of comorbid conditions, learned helplessness, and depressive symptoms. Statistically significant, inverse associations were identified among social support, self-efficacy, learned helplessness, and depressive symptoms. Hierarchical regression analysis suggested that, after the influence of other study variables was controlled for, learned helplessness continued to contribute significantly to the occurrence of depressive symptoms in individuals after AMI.
Conclusions
Although the concept of depressive symptoms in patients after an AMI has been thoroughly evaluated, the contribution of learned helplessness to depressive symptoms has not been evaluated in this population. These results indicate learned helplessness is uniquely associated with depressive symptoms in individuals after an AMI. In developing treatment plans after AMI, health care staff need to expand their focus beyond the physiological and identify psychological points of intervention.
RS27 Multiparameter Predictive Monitoring for Hemodynamic Instability in the Intensive Care Unit
Mary Jahrsdoerfer, Larry Eshelman, Abigail Flower, Joseph Frassica, Brian Gross, K. P. Lee, Larry Nielsen, Mohammed Saeed; Philips Healthcare, Andover, MA
Purpose
Clinical decision support systems can help clinicians improve various aspects of clinical practice, particularly when they are present at the point of care. A multiparameter predictive monitoring algorithm has been developed as a tool for the early detection of impending hemodynamic instability requiring clinical intervention in the intensive care unit.
Background/significance
Most existing real-time patient monitoring alert systems are limited to detecting the existence of urgent conditions. They rarely rely on multiparameter analysis and are subject to high false-alert rates. An automated multiparameter algorithm was developed that detects an impending acute hypotensive event hours before the event occurs, with the aim of maintaining a low alert rate and creating an actionable alert as a bedside tool to assist clinicians.
Method
Retrospective data from 25 hospitals including approximately 41 000 patients were analyzed. An event of “hemodynamic instability” was defined as a clinical intervention with a vasopressor, packed red blood cells, or significant amount of fluids. A variety of machine learning algorithms were used to create an index reflective of hemodynamic instability and to determine optimal thresholds for issuing alerts. Specificity was optimized over sensitivity to ensure that the issued alerts were actionable.
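The method's choice to optimize specificity over sensitivity can be illustrated with a minimal sketch: scan candidate cutoffs on a hypothetical instability index and keep the lowest one that meets a specificity target. The scores, target, and function name below are invented for illustration, not the study's algorithm or data.

```python
# Hypothetical sketch: choosing an alert threshold that favors specificity
# over sensitivity, as described for the hemodynamic-instability index.
# All scores and the target value are illustrative, not the study's data.

def pick_threshold(stable_scores, unstable_scores, target_specificity=0.95):
    """Return the lowest threshold whose specificity meets the target,
    along with the sensitivity and specificity achieved there."""
    candidates = sorted(set(stable_scores) | set(unstable_scores))
    for t in candidates:
        # An alert fires when the index exceeds the threshold.
        false_alerts = sum(s > t for s in stable_scores)
        specificity = 1 - false_alerts / len(stable_scores)
        if specificity >= target_specificity:
            sensitivity = sum(s > t for s in unstable_scores) / len(unstable_scores)
            return t, sensitivity, specificity
    return None

stable = [0.1, 0.2, 0.15, 0.3, 0.25, 0.4, 0.35, 0.2, 0.1, 0.3]
unstable = [0.5, 0.7, 0.65, 0.45, 0.8, 0.9]
threshold, sens, spec = pick_threshold(stable, unstable)
```

Raising the specificity target pushes the threshold up, trading missed events for fewer false alerts per patient-day, which is the stated rationale for keeping alerts actionable.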
Results
The developed predictive algorithm performs with significantly higher specificity than existing real-time patient monitoring systems and can detect events an hour to several hours before the point of crisis, while maintaining a low alert load. Truly unstable patients have an average alert rate of 0.87 alerts per patient-day, while stable patients have an alert rate of only 0.08 alerts per patient-day.
Conclusions
The future of patient monitoring and clinical decision support lies in the development of algorithms capable of intelligently interpreting large amounts of physiological data and providing warning of impending pathological conditions in time for effective corrective action to be taken. The newly developed system tested in this study is aligned with this vision in the domain of hemodynamic monitoring in critical care patients, allowing more specific and earlier detection of acute events.
RS28 Novice Critical Care Nurse Performance of Defibrillation, Cardioversion, and Transcutaneous Pacing
David Schmidt; The Christ Hospital, Cincinnati, OH
Purpose
This study evaluated the ability of a convenience sample of intensive care unit (ICU, n = 22) and telemetry (n = 33) nurses to perform cardioversion, transcutaneous pacing, and defibrillation (CPD) in a skill test. Also, self-reported confidence in performing CPD was obtained before the skill check. All subjects had completed an Advanced Cardiac Life Support (ACLS) course during their critical care internship, which ended 6 (n = 22), 12 (n = 24), or 18 (n = 9) months before testing. No formal CPD practice occurred between the ACLS course and this test.
Background/significance
ACLS-certified nurses are responsible for acting when a patient has a life-threatening dysrhythmia. Additionally, ICU nurses are often designated code responders and are expected to have the confidence, knowledge, and skill to operate the CPD equipment. Timely and accurate CPD interventions may improve patient survival. Dyson and Smith report that defibrillation errors partially result from human error related to lack of experience and training.
Method
After approval was obtained from the institutional review board, 56 nurses who had completed the critical care internship within the past 18 months were invited to participate in the study. One nurse refused. Subjects were provided with an ICU nurse to care for their patient during data collection. Testing was performed individually after completion of a demographic survey and a self-confidence in critical care skills survey created by the investigator. A script was used to present the scenario for cardioversion, pacing, and defibrillation, in that order. Subjects were instructed to focus only on how to manually perform the skill and were not required to demonstrate other ACLS tasks. Debriefing and practice were provided after testing.
Results
Most nurses held associate’s degrees (66%) and 2 had prior CPD skills. Five were charge nurses and routinely checked the crash cart and CPD equipment. Confidence in ability to perform CPD was rated on a scale of 0 (not confident) to 4 (very confident). Mean self-reported scores for the 3 CPD skills were 1.44 (SD, 1.29), 1.69 (SD, 1.05), and 1.18 (SD, 1.2), respectively. Cardioversion was performed correctly by 55%, but only 23% were able to pace. Most (92%) were able to perform the last task of defibrillation. There were no significant differences (P > .05) based on unit or internship cohort. One person was unable to perform any task, 40% performed 1 task, and 44% completed 2 tasks. Only 8 (14.5%) completed all tasks.
Conclusions
The idea for this study was conceived on the basis of anecdotal accounts of low confidence operating the CPD equipment with only ACLS training. On the basis of the self-efficacy and performance results of this study, this limited training is not enough for novice nurses. Further research is needed to determine how self-efficacy and CPD performance can be improved. A follow-up study is in progress to assess whether monthly practice for 6 months increases skill and self-efficacy in CPD.
RS29 Perioperative Predictors of Feeding Intolerance After Surgery in Infants With Congenital Heart Disease
Ju Yeon Uhm; Asan Medical Center, Seoul, Korea
Purpose
To determine perioperative factors associated with feeding intolerance following cardiac surgery in infants with congenital heart disease and to identify the relation between feeding intolerance and a vasoactive-inotropic score at the time of feeding initiation.
Background/significance
Nutritional imbalance is common in infants after congenital heart surgery. It is crucial to identify causes of feeding intolerance and to provide adequate enteral nutrition in these patients. Previous studies reported on gastrointestinal morbidity and feeding algorithms for infants with specific cardiac diseases. However, perioperative factors contributing to feeding difficulties and the influence of vasoactive agents on feeding intolerance are not clearly understood.
Method
The charts of infants who underwent operations under cardiopulmonary bypass in 2008 were reviewed. During the review, we collected perioperative variables, Risk Adjustment for Congenital Heart Surgery (RACHS-1) categories, and vasoactive-inotropic scores (VIS). Feeding intolerance was defined as a delayed feeding process due to significant residual volume, vomiting, abdominal distention, loose stool, or bloody stool. The study cohort was divided into a feeding intolerance group and a standard feeding group. A multivariable logistic regression model was used to compare variables between the 2 groups. The relationship of feeding intolerance to VIS was evaluated by using Spearman correlations.
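The Spearman correlation named above is an ordinary rank correlation: rank both variables (averaging ranks for ties) and take the Pearson correlation of the ranks. A minimal self-contained sketch follows; the VIS values and intolerance indicators are invented for illustration, not the study's data.

```python
# Illustrative sketch of the Spearman rank correlation used to relate
# feeding intolerance to the vasoactive-inotropic score (VIS).
# The example data are invented, not the study's.

def rank(values):
    """Return 1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

vis = [0, 3, 5, 8, 10, 15]        # hypothetical VIS at feeding initiation
intolerant = [0, 0, 0, 1, 1, 1]   # hypothetical intolerance (1 = yes)
rho = spearman(vis, intolerant)
```

With a binary outcome, as here, rho reflects how consistently higher VIS values fall in the intolerant group, which is why even a perfectly monotone toy example yields rho below 1 once ties are rank-averaged.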
Results
Feeding was initiated in 234 postoperative infants. Sixty patients (25.6%) had a RACHS-1 category greater than 3, and the mortality rate was 3.4%. Feeding was started on postoperative day 1, and the VIS at that time was 3 (range, 0–42). Feeding intolerance occurred in 38 patients (16.2%). These infants had a longer stay (37.4 vs 9.4 days, P < .001). In a multivariable model, time from surgery to initial feeding (odds ratio, 1.71; 95% CI, 1.31–2.23) and operative complications (odds ratio, 4.8; 95% CI, 1.77–13.0) were significant predictors of feeding intolerance. There was a weak correlation between feeding intolerance and VIS at the start of feeding (r = 0.198, P < .002).
Conclusions
In infants undergoing cardiac surgery, the incidence of feeding intolerance is high. Operative morbidity and delayed feeding initiation were associated with feeding intolerance. The amount of cardiovascular support at the time of feeding initiation has a weak influence on feeding processes. Future prospective studies and quality improvement initiatives are necessary to modify those risk factors to promote early feeding in infants after congenital heart surgery.
RS30 Monitoring of Pleth Variability Index in Seriously Injured Combat Casualties
Elizabeth Bridges; University of Washington School of Nursing, Seattle, WA
Purpose
In severely injured combat casualties (1) to describe the correlation between functional indicators from an arterial catheter-systolic pressure variation (SPV/SPV%), pulse pressure variation (PPV), and the noninvasive pleth variability index (PVI) during resuscitation and (2) to describe the accuracy of PVI for predicting fluid responsiveness on the basis of arterial catheter functional indicator thresholds.
Background/significance
Resuscitation of seriously injured combat casualties is complex. Vital signs may not reflect occult blood loss nor are they predictive of whether a patient will respond to a bolus with an increase in stroke volume, increasing the risk for fluid overload. Under combat conditions, invasive monitoring is limited; thus, noninvasive monitoring of fluid status and response to treatment is critical. The accuracy and reliability of PVI, a noninvasive indicator of fluid responsiveness, has not been studied in these patients.
Method
Prospective observational design. Severely injured combat casualties admitted to 2 US military trauma hospitals in Afghanistan were studied from admission through resuscitation. Continuous PVI data were obtained via pulse oximeter (Masimo Rainbow SET, Prove Rev E/Radical-7 Pulse Co-Oximeter v 7.6.2.1). Patients were ventilated (tidal volume [Vt] 7.7 [SD, 1.5] mL/kg; 72% had a Vt < 8 mL/kg). Vital signs and arterial catheter tracings for functional indicators were obtained every 15 minutes and before/after any bolus/therapy that might affect outcomes. Arterial catheter indicator thresholds for fluid responsiveness were used to establish a PVI threshold for fluid responsiveness. Data were reported for the subset of patients with more than 60 minutes of intensive care.
Results
A total of 15 patients were studied. Demographics: Injury Severity Score, 21 (SD, 10); age, 29 (SD, 8) years; male, 100%; body temperature less than 95°F (n = 1). The group did not differ significantly from 10 patients who went to the operating room. Injury cause: improvised explosive device (67%)/gunshot (27%). Monitoring time was 150 (SD, 59) minutes. A total of 81 PVI-arterial catheter indicator data pairs were analyzed. Each patient contributed 6 (SD, 4) pairs per indicator. There was a strong correlation between PVI and SPV (r = 0.61), SPV% (r = 0.72), and PPV (r = 0.73). Independent of Vt, a PVI > 15.5 discriminated fluid response status for SPV% (area under curve [AUC] = 0.89 [SD, 0.04]; sensitivity = 0.83/specificity = 0.92) and PPV (AUC = 0.89 [SD, 0.04]; sensitivity = 0.77/specificity = 0.97). PVI > 16.5 discriminated for SPV (AUC = 0.73 [SD, 0.06]; sensitivity = 0.66/specificity = 0.84).
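The sensitivity/specificity figures above come from applying a cutoff rule ("PVI greater than the threshold predicts fluid responsiveness") to paired observations. A minimal sketch of that computation follows; the PVI readings and responder labels are invented for illustration, not the study's data.

```python
# Minimal sketch of how a PVI cutoff discriminates fluid responders,
# mirroring the sensitivity/specificity reporting above.
# The readings and responder labels are invented examples.

def sens_spec(pvi_values, responders, cutoff):
    """Sensitivity and specificity of 'PVI > cutoff' for predicting
    fluid responsiveness (responders coded 1, nonresponders 0)."""
    tp = sum(p > cutoff and r == 1 for p, r in zip(pvi_values, responders))
    fn = sum(p <= cutoff and r == 1 for p, r in zip(pvi_values, responders))
    tn = sum(p <= cutoff and r == 0 for p, r in zip(pvi_values, responders))
    fp = sum(p > cutoff and r == 0 for p, r in zip(pvi_values, responders))
    return tp / (tp + fn), tn / (tn + fp)

pvi = [8, 12, 14, 18, 20, 25, 10, 17, 22, 9]   # hypothetical PVI readings
resp = [0, 0, 0, 1, 1, 1, 0, 1, 1, 0]          # 1 = fluid responder
sensitivity, specificity = sens_spec(pvi, resp, cutoff=15.5)
```

Sweeping the cutoff over all observed values and plotting sensitivity against 1 − specificity yields the ROC curve whose area (AUC) the abstract reports.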
Conclusions
This study was the first in which PVI was evaluated during the resuscitation of severely injured combat trauma patients. PVI correlates with other well-established functional indicators and can be used to predict fluid response status during the ICU phase of resuscitation. The noninvasive nature of the monitoring is potentially beneficial under austere conditions as it allows for immediate monitoring. The results of this study apply to all seriously injured patients. Further study of the use of PVI during transport is needed.
RS31 Predictors of Pressure Ulcer Development in Adult Critical Care Patients
Jill Cox; Englewood Hospital and Medical Center, Englewood, NJ
Purpose
To determine which risk factors are the best predictors of pressure ulcer development in adult critical care patients. Risk factors under investigation included total Braden scale score, mobility, activity, sensory perception, moisture, friction/shear, nutrition, age and blood pressure, length of intensive care unit admission, score on the Acute Physiology and Chronic Health Evaluation II, vasopressor administration, and comorbid conditions.
Background/significance
Pressure ulcer development in critically ill patients has been described as one of the most underrated conditions that plague this population. Currently, there is a lack of consensus regarding the risk factors that pose the greatest threat for pressure ulcer development in this population, in addition to the lack of a risk assessment scale that exclusively measures pressure ulcer risk in critically ill adults.
Method
A retrospective, correlational design was used to examine 347 patients admitted to a medical/surgical intensive care unit from October 2008 to May 2009. Descriptive statistics included frequency distributions of study variables and demographic data. Pearson product moment correlation was used for correlational analyses of the study variables, and direct logistic regression was used to create a model for predicting pressure ulcer development in critical care patients.
Results
In direct logistic regression analyses, the risk factors age, length of stay in the intensive care unit, mobility, friction/shear, norepinephrine infusion, and cardiovascular disease explained a significant portion of the variance in pressure ulcer development in this sample of adult critical care patients.
Conclusions
Current pressure ulcer risk assessment scales may not capture risk factors that confront critically ill adults. Development of a pressure ulcer risk assessment model for use in intensive care units is warranted and would serve as the foundation for development of a critical care pressure ulcer risk assessment tool.
RS32 Pressure Ulcer Prevention in High-Risk Cardiovascular Surgery Patients
Regina Freeman; University of Michigan, Ann Arbor, MI
Purpose
The purpose of this quality improvement study was to decrease pressure ulcers in high-risk cardiovascular surgery patients. Unit-acquired pressure ulcer prevalence rates in the cardiovascular intensive care unit (CVICU) indicated an area for improvement. Sacral and heel pressure ulcers were most frequently identified in postoperative high-risk cardiovascular surgery patients. Multiple deep tissue injuries on the sacrum and occasionally on the heels were documented 24 to 48 hours postoperatively.
Background/significance
The development of pressure ulcers in hospitalized patients is considered a never event by the Center for Medicare and Medicaid Services. Current literature describes significant adverse monetary and safety effects of pressure ulcers in the hospital setting. Multiple causes contribute to pressure ulcers, including age, body mass index, moisture, heart failure, immobility, long operating room times, poor perfusion, inadequate nutrition, use of vasopressors, and a past history of pressure ulcers.
Method
In this quantitative, quasi-experimental quality improvement study, patients admitted to the CVICU, the preoperative unit, and the operating room were assessed for pressure ulcer risk on the basis of standardized criteria. Preventative soft silicone foam border dressings were applied to the sacrum and heels of at-risk patients. Visual skin assessments and chart reviews were completed for all patients by using a standardized data collection tool during the trial period. Data collection and chart reviews provided pressure ulcer rates before and after the intervention for sacrum/coccyx and heels in the CVICU from January 2011 through June 2011. Unit pressure ulcer rates were compared before and after implementation of the dressings.
Results
Of 287 patients admitted to the CVICU, data collection forms were received for 61.3%, and 76.7% of those forms could be used for further analysis. Of these, 8.2% of patients did not meet criteria or receive preventative dressings; further analysis was completed on the 91.9% of patients who qualified for the preventative sacral and heel dressings. Of 124 patients with dressings applied, no heel ulcers developed and 5.7% of patients developed sacral pressure ulcers. Among the patients with sacral ulcers, 42.9% had stage I ulcers and 57.1% had deep tissue injuries. Preintervention CVICU pressure ulcer rates on the sacrum and heel averaged 7.1% per month; after the intervention, they averaged 2.7% per month.
Conclusions
The use of sacral and heel dressings applied preoperatively or on admission can reduce the development of pressure ulcers in high-risk cardiovascular surgery patients. Collaborative work through the continuum of care has fostered improved patient outcomes. Pressure ulcer prevention education and monitoring and the use of preventative sacral and heel dressings and skin assessments will continue in the preoperative unit, operating room, and CVICU to assist in maintaining low pressure ulcer rates.
RS33 Randomized Evaluation of the Effects of Guided Imagery on Sleep and Biomarkers After Cardiac Surgery
Jesus Casida, LaVonne Shpakoff; Wayne State University College of Nursing, Detroit, MI
Purpose
To evaluate the feasibility and acceptability of using a guided imagery program (Healthful Sleep) as a sleep-promoting intervention for patients after cardiac surgery. We hypothesized that patients who receive guided imagery will show (a) shorter sleep latency, (b) higher sleep efficiency and total sleep time, and (c) lower levels of stress (cortisol) and inflammatory (C-reactive protein [CRP]) markers over time than patients who did not receive the intervention.
Background/significance
Guided imagery is a meditation and relaxation technique used to direct the patient’s attention to a positive and tranquil state, leading to a deep natural sleep and consequently reducing stress and inflammation. Building on this knowledge, which was derived from studies involving community-dwelling and nonhospitalized adults, we integrated the use of a guided imagery program into the postoperative management of sleep disturbances and stress/inflammation associated with cardiac surgery.
Method
We employed a pretest/posttest, repeated-measures, control-group design. Fifty-two patients were randomly assigned to either the intervention (n = 27) or the no-intervention (n = 25) group. Patients in the intervention group used the guided imagery program, delivered via an MP3 player with sleep headphones, for 1 hour between 10:00 pm and 12:00 midnight on postoperative days (PODs) 1 through 5. Sleep variables were measured with wrist actigraphy (Actiwatch 64W). Salivary cortisol and CRP levels were measured between 6:00 am and 8:00 am on PODs 1 to 5. A 7-item satisfaction questionnaire using a Likert response scale of 1 to 5 was administered on POD 6. We used IBM SPSS 19.0 for data management and analyses.
Results
Of the 52 patients, 40 (20 in the intervention group and 20 in the no-intervention group) completed the study. Two-way repeated-measures analysis of variance showed no significant group interaction effects on sleep and biomarker variables, although patients in the intervention group had shorter mean sleep-onset latencies and lower mean cortisol and CRP levels than the comparison group. It is worth noting that we found a within-group time effect on cortisol levels, which decreased significantly on PODs 4 to 5 (F = 6.047, P = .001). Overall, patients in the intervention group were satisfied (mean, 4.24; SD, 0.80) with the integration of a guided imagery program into postoperative care.
Conclusions
Guided imagery programs can be successfully integrated as sleep-promoting interventions for adult cardiac surgery patients, warranting further investigation. Future research should include a larger sample to establish a definitive conclusion about the effects of guided imagery on patients’ sleep, stress, and CRP levels postoperatively. Nonetheless, the findings contribute to the growing body of knowledge on the use and acceptance of complementary and alternative therapies in the ICU.
RS34 Risk Factors Associated With Occipital Pressure Ulcers in Hospitalized Children
Mary-Jeanne Manning, Martha Curley; Children’s Hospital Boston, Boston, MA
Purpose
To identify risk factors associated with development of occipital pressure ulcers (OPUs) in acutely ill children. Study results will assist in the design of effective nursing interventions that may decrease the occurrence and/or limit the severity of OPUs in these vulnerable patients.
Background/significance
Pressure ulcers are a serious yet preventable iatrogenic injury associated with acute care hospitalization. Prevention of pressure ulcers requires an awareness of risk factors associated with their development. Although several studies have reported the occiput as a common location for pressure ulcers in hospitalized infants and young children, risk factors associated with their occurrence have not been fully described.
Method
Retrospective chart review. Patients in whom OPUs developed while hospitalized were identified through snowball sampling of cases recalled by members of our Skin Care Special Interest Group, as well as cases reported in our computerized Safety Event Reporting System (SERS) since its implementation in 2005. Data were extracted from the medical record by a clinical nurse specialist using a data collection instrument that included demographic variables, Braden Q scores, device use, and intervention-level data. The study was reviewed and approved by the institutional review board.
Results
A total of 62 cases of OPU were identified: 38% stage I, 12% stage II, 31% unstageable, and 20% deep tissue injury. Median age was 12 months (interquartile range [IQR], 3–31). About 90% were intensive care patients with cardiovascular (52%) or pulmonary (31%) problems; 68% had comorbid conditions. The median number of hospital days before OPU identification was 17 (IQR, 9–34). At the time of discovery, 84% were receiving mechanical ventilation (of whom 48% received neuromuscular blocking agents, 15% were supported on high-frequency oscillatory ventilation, and 8% on extracorporeal membrane oxygenation). Although 74% were receiving sedation, scores on the State Behavioral Scale indicating agitation were recorded in 85% of “sedated” patients. About 50% were receiving vasoactive medications and 45% had a neck catheter that restricted head movement. Where applicable, 52% had a documented score on the Braden Q Scale; when present, the median Braden Q score was 16 (IQR, 14–18).
Conclusions
These data help us identify risk factors associated with development of OPUs in acutely ill children. Patients at risk are critically ill, require high-risk therapies and medical devices, and are less than 1 year of age. We believe that OPUs can be a “zero” event. Our next step is to design and test nursing interventions that will decrease the occurrence and/or limit the severity of OPUs in these vulnerable patients.
RS35 Standardized Approach to Patient/Family-Centered Care and Its Effect on Health Care Providers and Patients’ Families
Mary Beth Leaton, Adriana Castano, Brittney Fleming, Denise Fochesto, Laura Reilly; Morristown Medical Center, Morristown, NJ
Purpose
This study aimed to answer the following research questions: (1) Does an educational video demonstrating the use of a newly developed communication tool and a standardized approach to patient-family centered care (PFCC bundle) improve nurses’ and physicians’ knowledge, beliefs, and attitudes toward PFCC in an adult intensive care unit (ICU)? (2) Does the use of a newly developed communication tool and a standardized approach to PFCC (PFCC bundle) by nurses and physicians improve patient/family satisfaction?
Background/significance
The literature on critical care families’ satisfaction has demonstrated that families express a need for a more inclusive role in clinical decision making. Despite voicing these concerns, families continue to feel uninformed about clinical decision making and the day-to-day care of the patient. To date, little research has specified interventions that can be easily replicated to address these needs.
Method
A quasi-experimental time series study design was used in a mixed-population adult ICU. To measure family satisfaction, the Critical Care Family Satisfaction Survey (CCFSS) was used. To measure health care providers’ knowledge and attitudes regarding PFCC, the Adult Provider Beliefs and Practices (APBP) survey was used. All ICU staff watched a video of skits demonstrating the 5 components of the PFCC bundle (family presence during nurse report, family presence during medical rounds, white boards, the ASCEND model of communication, and direct care). Education of staff and implementation of the PFCC bundle occurred for 3 months. Surveys were distributed after the education and paired with presurveys for the APBP survey.
Results
The one statistically significant result was that more nurses agreed that families should be included in nurse shift report. Although the differences were not statistically significant, the majority of responses shifted from negative to positive for the following survey items: “Families should have the option of being present during medical procedures” and “Families should be encouraged to be present in medical/teaching rounds.” Postsurvey results for the CCFSS demonstrated that families of our medical ICU patients were 100% satisfied with all items. Families of our trauma patients were no longer dissatisfied with availability of doctors.
Conclusions
Results demonstrated that the nurses and physicians who participated in the study had a positive shift in their attitude toward involvement of families during medical procedures and participation in medical/nursing rounds. They also had a greater sensitivity to the environmental needs of families in the ICU. As a result of the PFCC bundle and education, those CCFSS items that addressed communication with the physicians and involvement in care or decisions showed the greatest improvement.
RS36 Sweet or Too Sweet: Keeping Best Practice in Nursing Protocols
Laurie Fitzgibbon; Aultman Hospital, Canton, OH
Purpose
Maintaining tight glycemic control in the intensive care unit (ICU) was long thought to reduce morbidity and mortality. However, current evidence suggests that tight glycemic control does not necessarily benefit critically ill patients and often increases the incidence of hypoglycemia. The purpose of this study was to determine whether changing the glycemic protocol goal to 110 to 150 mg/dL would reduce episodes of hypoglycemia and improve glycemic control.
Background/significance
Evidence-based practice has become the mantra of critical care units that strive to meet multiple practice guidelines. Clinical practice protocols can improve quality of care and outcomes for patients. In 2005, the ICUs of this urban 700-bed hospital developed and implemented a glycemic control protocol with the goal to keep blood glucose levels between 80 and 110 mg/dL. During daily rounds, concern was raised about the swings in patients’ blood sugar levels and increased episodes of hypoglycemia.
Method
An interdisciplinary team approach and a continuous quality/performance improvement method using the Plan, Do, Check, Act format were started. The initial plan included (1) identification of problem areas, (2) review of the literature for best practice, (3) revision of the glycemic protocol, and (4) education of nursing staff and residents. The review of the literature supported changing the protocol goals to 110 to 150 mg/dL. The protocol was redesigned to facilitate ease of use, and nurses and residents were educated on the changes. The critical care services database was used to randomly sample the glycemic results of ventilator patients before and after implementation.
Results
A total of 275 medical and surgical ICU ventilator patients’ blood glucose results before implementation were reviewed. Of those patients, 36% of medical ICU patients and 25% of surgical ICU patients had mean blood glucose levels greater than 150 mg/dL. Additionally, 16% of medical and 15% of surgical ICU patients had hypoglycemic episodes (defined as <50 mg/dL). A review of 247 ventilator patients in the medical and surgical ICUs after implementation showed a 20% overall improvement in glycemic control and a 76% reduction in episodes of hypoglycemia.
Conclusions
A multidisciplinary and systematic approach to the development and review of clinical practice protocols allows evidence-based practice to become the gold standard of care. However, measurement of outcomes is fundamental to evaluating clinical guideline practice and improving patients’ outcomes. Nurses play a fundamental role in implementation of clinical protocols that reduce morbidity and mortality in ICU patients.
RS37 Teaching Together with a Stronger Approach, for Better Outcomes
Kristina Krumrei, Rachel Sherman; University Hospitals Case Medical Center, Cleveland, OH
Purpose
To identify perceptions of and readiness for discharge related to the content, timing, and method of discharge information among patients discharged to home from the intensive care unit (ICU) after a myocardial infarction (MI).
Background/significance
Patients who have had an MI are being discharged to home from the ICU in increasing numbers. Patients who experience a first-time MI are particularly vulnerable and most likely have limited receptivity to discharge teaching. Building the much-needed evidence base for an ICU-to-home discharge teaching model must begin with patients’ perceptions and reports of the content, timing, and method of information.
Method
A descriptive cross-sectional design used survey methods with a convenience sample of 85 MI patients discharged from an academic medical center’s coronary intensive care unit. The Coleman Care Transitions Measure was used to assess general discharge content and was modified to include items representing cardiac-specific content. Timing was assessed by asking patients to rate their preference among 4 times for teaching. Method was assessed by asking patients to rate 7 visual and written methods for conducting discharge teaching. After approval was provided by the institutional review board, patients were approached within 24 hours of discharge. If they consented, a survey was mailed on the fourth day after discharge.
Results
The mean age of patients was 60 (SD, 13.1) years; most were male (56%) and white (80%). Most were married (60%), well educated (59%), and lived with a spouse or other relative. The mean length of stay was 4 (SD, 1.8) days. Content of Teaching: The mean scores for the General Care Transition Measure and the cardiac-specific measure were 79.3 (SD, 15.0) and 68.6 (SD, 17.9), respectively. General and disease-specific discharge content was significantly different with regard to age, hospitalization, and sex. Most patients believed that the best time to teach was immediately before discharge (89%). Preferences for method of teaching were discussion (89%) and written information (67%).
Conclusions
Patients were more prepared to manage their general care at home and less prepared to manage care specific to recovery from MI. Although most patients preferred discussion and written materials as the method for teaching, women, older adults, and those previously hospitalized reported being less prepared; we therefore recommend an individualized approach. In these groups of patients, the preferred teaching method should be assessed and family members should be involved in teaching.
RS38 Tissue Oxygenation in Postoperative Patients: Is There a Relationship to Complications?
Carol Epstein, Karen Haghenbeck, Joan Madalone; Pace University, Pleasantville, NY
Purpose
This pilot study examined the relationship between tissue oxygen saturation (StO2) levels and postoperative complications in adult patients. Expressed as a percentage, StO2 represents the ratio of oxygenated hemoglobin to total hemoglobin in the skeletal microcirculation. It is expected that normal reference values of StO2 in this patient population will be described. Crookes, Cohn, and Bloch reported that in healthy volunteers (n = 707), the mean StO2 in thenar muscle was 86.6% (SD, 6.4%).
Background/significance
Near-infrared spectroscopy is a noninvasive method of measuring differential forms of hemoglobin. Using the InSpectra Tissue Spectrometer, investigators reported that a cutoff StO2 value less than 75% was predictive of multiple organ dysfunction (MODS) in patients at risk for hemorrhagic shock. In surgical patients, as oxygen demand increases relative to global oxygen delivery, as occurs in postoperative rewarming, StO2 may decline as local blood flow is redistributed to vital organs.
Method
This prospective study was carried out in the postanesthesia care unit or the cardiothoracic intensive care unit of a level I trauma center. StO2 monitoring was completed for the first 2 hours in a convenience sample of 31 postoperative patients at least 18 years old. The StO2 probe was placed on the thenar eminence of the hand; values were recorded every 15 minutes. Patients were monitored for postoperative complications, based on time of onset and organ system. Nonparametric correlation coefficients were calculated in order to determine the strength of the relationship between StO2 values and the incidence of postoperative complications. A nondirectional P value of less than .05 was considered statistically significant.
Results
Thirty-one subjects (mean [SD] age, 57.52 [17.52] years) were studied. The mean (SD) body mass index of 31.19 (8.4) correlated negatively with the first StO2 value (Pearson r = −0.544, P = .002). Initial temperature and first StO2 correlated (Pearson r = 0.345, P = .051). The mean (SD) first, last, and average StO2 values were 80.71% (9.16%), 80.94% (7.07%), and 80.56% (7.18%), respectively. No relationship between sample mean StO2 and incidence of complications was found (n = 9, 29%). However, mean minimum values of the first, last, and mean StO2 values were 55%, 67%, and 65%, respectively. When the minimum StO2 for each patient was tested for differences in outcome, there was a significant difference in StO2 values.
Conclusions
Findings from this pilot study indicate that postoperative patients, overall, maintained a stable level of StO2 values in their early recovery phase. A small group had postoperative complications develop, the incidence of which was not significantly related to the sample’s first, last, or mean StO2 values. Sudden decreases in StO2 values, however, may be more sensitive in detecting the potential for complications. A case study will highlight the clinical utility of StO2 monitoring.
RS39 Treatment Intensity Level: An Instrument for Intracranial Pressure Management Burden and Nurse Staffing
Megan Maserati, Anita Fetzick, David Okonkwo, Ava Puccio; University of Pittsburgh, Pittsburgh, PA
Purpose
To validate a published therapy intensity level (TIL) instrument in patients with traumatic brain injury (TBI), with the ultimate goal of measuring patients’ intensive care needs to drive nurse staffing. The TIL is a metric that quantifies the burden of intracranial pressure (ICP) management, providing an objective measure of the intensity of care required of the medical team and, in particular, nurses at the bedside.
Background/significance
Prevention of secondary injury in TBI is the goal of the critical care team, with 1 episode of sustained hypotension increasing mortality and affecting neurological outcome. Staffing of nurses below target levels is associated with increased mortality. A means to validate TBI patients’ needs to justify increased nurse staffing is needed. A proposed TIL requires validation in a prospectively collected sample.
Method
A retrospective review of prospectively collected data from the Brain Trauma Research Center was used. The TIL (Maas) was calculated once per 12-hour nursing shift for the initial 5 days of hospitalization in the intensive care unit (ICU). TIL variables included sedation, body temperature, blood pressure, ventilation, ventricular drainage, osmotic therapy, coma induction, and surgical management. ICP burden was defined as the highest ICP value for the shift. Pearson correlations (2-tailed) were performed by comparing total TIL values and ICP. Adult patients with severe TBI were sampled from 2 cohorts, low ICP burden (ICP values ≤ 20 mm Hg) and high ICP burden (ICP values > 20 mm Hg), to ensure an equal sample.
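The analysis pairs each shift’s total TIL score with that shift’s highest ICP value and correlates the two. As a minimal sketch of the underlying computation (illustrative only; the study used standard statistical software), a Pearson r can be computed from paired shift-level values like this:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length
    numeric sequences, e.g. per-shift total TIL scores (x) and
    per-shift highest ICP values (y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms.
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)
```

A perfectly linear increasing relationship returns r = 1.0; in the study, the observed r of 0.622 indicates a strong but imperfect association between TIL and ICP burden.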
Results
Results were obtained from 20 adult TBI patients (mean [SD] age, 37 [17] years): 9 with a high ICP burden and 11 with a low ICP burden were compared for TIL total values and ICP burden. Pearson correlation of the TIL (Maas) and ICP burden was highly significant (r = 0.622, P < .001). The TIL is an objective measure that quantifies and illustrates ICP burden and the intensity of care that the severe TBI patient requires from the medical team and, in particular, the bedside nurse.
Conclusions
The TIL instrument is an accurate, objective measurement of ICP management among patients with severe TBI. We propose use of this instrument in the neurotrauma ICU to provide a score that quantifies nursing care. This score could be used in patient rounds for discussion of individualized patient care, nurse staffing needs, and predictions of potential neurological outcome. Future directions include replication in a larger sample and comparison with length of stay and neurological outcome.
RS40 Using a Path Model Investigation of Nurses’ Performance During Critical Situations
Charles Reed, Andrea Berndt, Jennifer Browne, Rachelle Jonas, Ronald Stewart, Susanne Thees, Katie Wiggins-Dohlvik; University Hospital, San Antonio, TX
Purpose
This secondary analysis tested a concept model of factors affecting nurses’ performance during critical situations, based on the concept map developed by Wiggins-Dohlvik et al to explain physicians’ performance during critical situations. Factors tested included nurses’ perceptions about helpful/harmful traits, reported learning methods, and preferred coping methods. The goal was to see if significant relationships would suggest educational approaches to enhance nursing performance in critical situations.
Background/significance
In a recent study of surgeons’ performance during critical situations, unique characteristics, behaviors, and techniques were identified that may reduce physicians’ stress during critical situations. Identification of similar characteristics for nurses could provide a rich resource for the development of educational techniques for novice and experienced critical care nurses and assist in reducing stress and improving nurses’ performance during critical situations.
Method
A secondary analysis of nurses’ responses (n = 175 of 270 nurses) tested a path model by using responses from critical care, emergency department, and recovery room nurses in South Texas. The survey “Critical Care Nurses’ Performance During Critical Situations” was modified from a tool examining surgeons’ performance. Survey items included appealing/unappealing aspects of critical care, preparing for critical situations, techniques to improve skills, and descriptors/traits. Using SPSS 17, factor analysis identified shared themes used to calculate composite scores. The path model tested variables linked to the Wiggins-Dohlvik et al concept map. The final model reports only significant paths.
Results
The final model had 13 variables and 20 significant paths. Increases in years of nursing experience were positive predictors of increased certifications (r = 0.40) and increased fatigue (r = 0.15). Through certification, increases in nurses’ experience had small, positive effects on mental preparation (r = 0.10) and on use of resources such as journals (r = 0.07) to prepare for critical situations. Increases in feedback methods were predictive of increased use of communication (eg, asking for help; r = 0.35) and of increased internal mastery (eg, prioritizing; r = 0.17). Finally, increased internal mastery was a positive predictor of use of strategies related to successful performance in critical care situations (r = 0.22).
Conclusions
The findings suggest that the Wiggins-Dohlvik concept map is a useful starting point to understand nurses’ performance during critical situations. Although the model was not fully tested, techniques such as feedback, communication, and mental preparation were shown to have direct and indirect positive relations to enhanced perceptions of internal mastery and to successful performance strategies in critical care situations. Future testing may lead to new insights about critical care performance.
RS41 Using Continuous Monitoring of Patients’ Position to Evaluate Charting Accuracy in the Intensive Care Unit
Charles Reed, Randall Beadle, Andrea Berndt, Royce Johnson, Nanette Larson; University Hospital, San Antonio, TX
Purpose
(1) To evaluate the frequency of patients’ repositioning by using accelerometers attached to patients. (2) To assess the congruence between documented repositioning of patients, direct observations, and data collected with accelerometers attached to patients.
Background/significance
Repositioning remains the hallmark of pressure ulcer prevention in the acute care setting. Although repositioning every 2 hours is the standard of care, limited information exists on what actually occurs in clinical practice. Staff observations had suggested that charting of patients’ repositioning can often be inaccurate.
Method
This prospective study enrolled 35 patients receiving mechanical ventilation in a surgical trauma intensive care unit (ICU). To determine the frequency with which repositioning occurred and the accuracy of documented repositioning, continuously recording 3-axis accelerometer sensors were attached to participants’ sternums. Sensors were zeroed on each participant, calibrated with the manufacturer’s data, and sampled at 15-second intervals. Records of patient position (every 2 hours) were obtained from direct observations and the electronic medical record: restraint records and the task flow sheet. The mean time between repositionings (recorded changes in position, or a measured change exceeding 15° sustained for 15 minutes) was compared between methods.
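The measured-repositioning criterion above (an orientation change exceeding 15° sustained for 15 minutes, sampled every 15 seconds) amounts to a simple threshold-and-hold rule. The following is an illustrative sketch under those assumptions, not the study’s actual algorithm:

```python
SAMPLE_SEC = 15                           # accelerometer sampling interval
HOLD_SAMPLES = (15 * 60) // SAMPLE_SEC    # 15-min sustained change = 60 samples
ANGLE_DEG = 15.0                          # minimum change counted as a reposition

def turn_times(angles):
    """Return sample indices at which a reposition is recognized.

    angles: body-orientation angle (degrees) sampled every 15 seconds.
    A reposition = a departure of more than ANGLE_DEG from the current
    reference position, sustained for HOLD_SAMPLES consecutive samples.
    """
    turns = []
    ref = angles[0]
    held = 0
    for i, a in enumerate(angles):
        if abs(a - ref) > ANGLE_DEG:
            held += 1
            if held == HOLD_SAMPLES:
                turns.append(i)   # change has now persisted 15 minutes
                ref = a           # the new position becomes the reference
                held = 0
        else:
            held = 0              # change not sustained; reset the counter
    return turns

def mean_turn_interval_hours(angles):
    """Mean time between recognized repositions, in hours."""
    t = turn_times(angles)
    if len(t) < 2:
        return None
    gaps = [(b - a) * SAMPLE_SEC / 3600.0 for a, b in zip(t, t[1:])]
    return sum(gaps) / len(gaps)
```

Applied to each patient’s angle series, the mean gap between recognized turns is the quantity the study compared against the charted 2-hour intervals.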
Results
The ICU’s observed positioning records showed a significant (P < .001) reduction in turning intervals for patients with sensors present vs patients without sensors. Data from patients with sensors were further analyzed and revealed a statistically significant (P < .001, paired t test) difference in mean turning intervals between day (2.28 [SD, 1.29] hours) and night (2.81 [SD, 1.90] hours) shifts. Recorded repositioning did not correspond to measured positioning: sampling at 15-second intervals showed a mean time between turns of 2.54 (SD, 1.62) hours, whereas restraint records showed a mean (SD) of 2.01 (0.17) hours, the task flow sheet showed 2.18 (0.85) hours, and direct observations showed 3.75 (2.42) hours. There was no significant agreement between the charted records and the measured intervals for turning patients.
Conclusions
The electronic medical record did not reflect the measured activity of the nursing staff in this study. The presence of the sensor appeared to increase the turning rates. The continuous monitoring also revealed practice features not captured by the charting system. Improved charting and/or automated monitoring are recommended in order to capture the accuracy and frequency of repositioning of patients.
Footnotes
Presented at the AACN National Teaching Institute in Orlando, Florida, May 19–24, 2012.