RES30 No Patient Left Behind: Universal Screening for Palliative Care Needs in an Integrated Health System

Mary P. Hicks, Elizabeth Distefano; Saint John Hospital and Medical Center, Detroit, MI

Purpose: To evaluate the effectiveness of a “palliative care screening tool” (PCST) by identifying: (1) percentage of patients who had any palliative care needs, (2) palliative care team (PCT) referrals resulting from the PCST, (3) common criteria leading to PCT referral, (4) common palliative care needs for patients not referred to the team, and (5) burden reported by users of the tool. Background: As part of a larger project to increase attention to palliative care needs, including needs for spiritual care, throughout a hospital system, we developed and implemented a PCST that identified patients for PCT consultation and patients with less urgent palliative care needs. Methods: The PCST was pilot tested in the medical intensive care unit (MICU), 2 medical/surgical units, and 1 oncology unit; case managers and social workers completed the tool for each patient. Data were analyzed for the MICU and non-MICU separately. When a patient met the criteria, the patient’s attending physician was contacted to give the physician the information and suggest a palliative care consult. Quantitative and qualitative data were collected to evaluate the implementation of the tool, including number of PCSTs completed, disposition of the PCST, palliative care consults resulting from the PCST, reasons orders for palliative care were not written for patients meeting the criteria, and staff feedback. Results: The tool successfully identified patients in need of PCT consultation; 46% of MICU and 12% of non-MICU patients met criteria for PCT consultation. PCT referrals from the MICU increased 4-fold in 1 year. Qualitative data showed that the tool was well received in the MICU but was a burden to use on other units, primarily because of its length, complexity, and the relatively few patients identified. 
The PCST data revealed 10 primary patient criteria, or “triggers,” associated with PCT referral. Conclusions: Use of a longer screening tool in the MICU was effective in identifying palliative care needs and increasing referrals to the PCT. Incorporating screening criteria or triggers into existing assessment tools for nursing, case management, social work, and pastoral care services increased attention to palliative care needs and increased referrals to the PCT.

RES31 Nurses’ Certification, Education, and Experience Improve Sedation Management

DaiWai M. Olson, Suzanne Thoyre, Charles Stoner, Meg Zomorodi; Duke University, Durham, NC

Purpose: To explore the impact of specific individual characteristics of nurses (education, certification, experience) on sedative use in neurologically injured patients receiving mechanical ventilation who require continuous intravenous sedation. Background: Sedation management in the intensive care unit (ICU) is primarily the purview of the bedside staff nurse. This task relies heavily on the nurses’ assessment skills, which are developed through education and experience caring for ICU patients. Limited research has been conducted examining the characteristics of the nurses completing these assessments. The increasing focus on sedation assessment in the literature and the diversity of nurses in critical care units warrants further investigation of this topic. Methods: A prospective randomized study of demographic and educational criteria of 65 nurses working in a neurocritical care ICU. The nurse data were paired with patient data for 5536 hours of continuous intravenous sedation. Nurse data included highest degree, specialty certification, and years of experience. Patients were randomized to have sedation assessed either with the Ramsay Scale or with the Ramsay Scale and bispectral index monitoring. Patient data included baseline demographics, total sedative volumes infused during the period of mechanical ventilation, and number of undersedation events. Results: SAS software (version 9.1) was used to explore the impact of specific nurse characteristics on sedation delivery. After patients’ weight and group assignment were controlled for, certified nurses (CCRNs, CNRNs) used significantly lower propofol infusion rates (16.7 vs 18.5 mL/h; P < .001). The highest level of nursing education was a significant predictor of mean propofol use; associate degree, 21.8 mL/h; bachelor’s, 17.2 mL/h; master’s, 14.9 mL/h; and doctorate, 12.1 mL/h. No difference (P = .80) was found in frequency of undersedation events by nurse experience. 
Multivariate regression models indicated that nursing experience and critical care experience were significant (P < .001) for predicting sedation use (greater experience = less sedative use). Conclusions: The contribution of individual nurses is important and measurable. These data provide solid evidence that nurses’ certification and education affect patients’ outcomes. Nurses with higher educational degrees, specialty certification, and more experience used significantly less sedation than their counterparts, with no difference in undersedation events. This study highlights the need to include nurse variables in future research on interventions in critical care.

RES33 Pain Assessment Tool in Nonverbal Critically Ill Patients in the Cardiac Postanesthesia Care Unit

Liza M. Marmo; Morristown Memorial Hospital, Morristown, NJ

Purpose: To compare 3 pain assessment tools for reliability in nonverbal critically ill patients. Background: Critically ill nonverbal patients often are subjected to unnecessary painful treatments and nursing procedures because health care providers are unaware of the patient’s pain. Critical care patients are at higher risk for untreated pain if they are unable to communicate. Pain that is persistent and untreated affects the body systemically and can result in postoperative complications, increased length of stay, morbidity, and chronic pain syndromes. Methods: A repeated-measures design was used with a convenience sample of 24 nonverbal critically ill cardiac surgery patients in the recovery room. Two testing periods were conducted, each including 3 assessments with the Critical Care Pain Observation Tool (CPOT), the Adult Nonverbal Pain Scale (NVPS), and the Faces, Legs, Activity, Cry, and Consolability (FLACC) scale, for a total of 6 pain assessments postoperatively. Two events were studied: suctioning and repositioning. Data were collected immediately before the event, 1 minute afterward, and 20 minutes after the event. Reliability was determined by using the Cronbach α statistic, correlation between raters (Pearson r), and percentage of agreement or disagreement between raters. Results: Both the CPOT and NVPS were very reliable, with Cronbach α coefficients of 0.89. Overall, the NVPS and CPOT were highly correlated for both raters (r > 0.80, P < .001; 11/12 times). Correlations between the 2 raters were generally moderate to high. Higher correlations were noted between raters with the CPOT. There was more disagreement between raters in overall pain scores for the NVPS. When raters disagreed, it was most often in rating of the face on both scales. Conclusions: Both the NVPS and CPOT adequately captured pain in nonverbal critically ill patients. Nurses disagreed most often on the face component of both scales, followed by muscle tension.
Disagreement was highest during the events (suctioning and repositioning). Pictures depicting facial expressions for scoring purposes are warranted. Adequate education and understanding of the use of these scales are critical for accurate assessment and subsequent interventions.

RES39 Spontaneous Variability in Functional Hemodynamic Indices: Enhancing Interpretation of Hemodynamic Changes

Susan Pambianco, Sarah Sprow, Sarah Barnes, Laura Sawin, Renee Harris, Sheryl Greco; University of Washington Medical Center, Seattle, WA

Purpose: To describe the spontaneous variability in systolic pressure variation (SPV and SPV%) over time in hemodynamically stable patients in the medical-surgical intensive care unit receiving mechanical ventilation. Background: The SPV and SPV% are functional hemodynamic indices that are sensitive and specific predictors of fluid responsiveness and may also be indicators of occult hemorrhage. Similar to research that has described the variability in pulmonary artery pressure, cardiac output, and mixed venous oxygen saturation, description of the normal fluctuation of SPV/SPV% may aid in the interpretation of changes in these indices and may serve to alert the nurse to an occult change in the patient’s status. Methods: An observational, repeated-measures design was used. Thirty stable patients (change in heart rate <10%, no arrhythmias) receiving mechanical ventilation in a medical-surgical intensive care unit were studied. For 30 minutes, systolic blood pressure (SBP, from a referenced and optimized arterial catheter) and airway pressure measurements were obtained every 5 minutes. At each time, SBP from 9 ventilator cycles was recorded. No interventions occurred during the study. Analog copies of the waveforms were printed, and the indices were measured offline by using a digitized copy. The variability for each subject between the 9 measures was averaged, and then the differences between each 5- and 10-minute period were compared by using a lag function. Results: Data from 28 patients were analyzed (2 excluded because of poor-quality tracings or ectopy). Patients were receiving assist-control ventilation (tidal volume, 570 [SD, 150] mL); SBP was 136 (SD, 22) mm Hg. One patient was receiving a vasopressor. The average SPV was 4.8 (SD, 3.6) mm Hg and the SPV% was 3.8% (SD, 3.1%).
The difference in average fluctuations between each 5-minute period (indicating the variability in the SPV and SPV% over time) was 0.1 (SD, 1.4) mm Hg (95% confidence limits [CL], −2.8 to 3.0 mm Hg) for SPV and 0.1% (SD, 1.0%; 95% CL, −1.3% to 3.0%) for SPV%. For a 10-minute lag, the mean variability of the difference was 0.2 (SD, 1.7) mm Hg (95% CL, −3.1 to 3.5 mm Hg) for SPV and 0.2% (SD, 1.2%; 95% CL, −2.0% to 2.5%) for SPV%. Conclusions: The SPV and SPV% were below thresholds indicative of fluid responsiveness, suggesting adequacy of resuscitation. The 95% CL indicate that a change in SPV greater than ±3 mm Hg or in SPV% greater than ±3% over 5 to 10 minutes would exceed expected variability, indicating a need to assess the patient for occult intravascular fluid loss or a change in vascular tone. Results confirm research indicating that a change in SPV greater than 4 mm Hg occurs with significant blood loss and suggest an additional use for these indices.


RES1 A Comparison of Computer and Traditional Face-to-Face Classroom Orientation for Beginning Critical Care Nurses

Patricia A. Anzalone, Mary Lou Sole; University of Central Florida, Orlando, FL

Purpose: To examine the equivalency of knowledge attainment in the cardiovascular module of the Essentials of Critical Care Orientation (ECCO) electronic-learning (e-learning) program to traditional face-to-face critical care orientation classes covering the same content. Additional aims were to determine if learning style is associated with a preference for type of learning method, and to determine any difference in learning satisfaction between the 2 modalities. Background: Education of novice critical care nurses has traditionally been conducted by educators in face-to-face classes in an orientation or internship. A shortage of qualified educators and growth in electronic modes of course delivery have led organizations to explore electronic learning to provide orientation to critical care nursing concepts. Equivalence of e-learning to traditional critical care orientation has not been studied. Methods: The study was conducted by using a 2-group pretest-posttest experimental design. Forty-one practicing nurses with no critical care experience or education were randomly assigned to either the ECCO (n=19) or face-to-face (n=22) group. Those in the face-to-face group attended 20 hours of classroom instruction taught by an expert educator. The ECCO group completed the lessons online and had an optional 2-hour face-to-face discussion component. Pretest measures included the Basic Knowledge Assessment Test (BKAT-7), modified ECCO Cardiovascular (CV) Examination, and the Kolb Learning Style Inventory (LSI). Posttests included the BKAT-7, modified CV Examination, and the Affective Measures Survey. Results: The majority of subjects were female and educated at the associate degree level, averaging 9.9 (SD, 11.7) years of nursing experience. Classroom instruction was preferred by 61% of participants. No statistical differences were noted between groups on any demographic variables or baseline knowledge. 
Learning outcomes were compared by repeated-measures analysis of variance; no significant differences (P > .05) were found between groups. Preference for online versus classroom instruction was not associated with learning style (χ2 = 3.39, P = .34). Satisfaction with learning modality was significantly greater for those in the classroom group (t = 4.25, P < .001). Conclusions: This first study to evaluate the ECCO orientation program contributes to the body of knowledge exploring e-learning versus traditional education. The study results provide evidence that the ECCO critical care education produces learning outcomes at least equivalent to traditional classroom instruction, regardless of the learning style of the student. Replication of this study with a variety of instructors, expanded populations, larger samples, and different subject matter is recommended.

RES2 Accuracy and Precision of Buccal Pulse Oximetry

Marla J. De Jong, Katherine McKnight, Elizabeth Bridges, Patricia Bradshaw, Joseph Schmelz, Karen Evers; Medical Research and Materiel Command, Fort Detrick, MD

Purpose: To describe the accuracy and precision of oxygen saturation as measured by buccal pulse oximetry (SbpO2) compared with arterial oxygen saturation (SaO2) obtained from a radial artery blood sample and pulse oximetry (SpO2) measured at the finger in healthy adults at normoxemia and under 3 hypoxemic conditions. Background: Hypoxemia is a life-threatening complication common in wartime casualties. Continuous pulse oximetry monitoring can rapidly detect hypoxemia. Nurses may be unable to use traditional oximetry monitoring sites, such as the finger or earlobe, for casualties with amputations, severe burns, vasoconstriction, hypothermia, shock, or edema. Alternative monitoring sites such as the buccal pouch (cheek) have been used. Little is known, however, about the accuracy and precision of buccal oximetry. Methods: Healthy, nonsmoking adults without baseline hypoxemia, edema, dyshemoglobinemia, or fever participated in this prospective, within-subjects experimental study. The SbpO2, SaO2, and SpO2 values were recorded at normoxemia and at 3 hypoxemic conditions (SpO2 = 90%, 80%, and 70%). Hypoxemia was induced by using the Reduced Oxygen Breathing Device 2. The Bland-Altman method was used to assess accuracy and precision between SbpO2 and SaO2 and between SbpO2 and SpO2. The data were adjusted to account for a lag time between the buccal and finger sites. The standard by which precision of the buccal measure was judged clinically acceptable or interchangeable was set a priori at ±4% variability. Results: Data were collected from 53 subjects (32 [SD, 9] years; 37% male). Comparing SbpO2 and SaO2 values, mean differences (bias) of −1.8%, 0.3%, 2.4%, and 2.6% were found at the normoxemia, 90%, 80%, and 70% levels, respectively. Comparing SbpO2 and SpO2 values, the mean difference was −1.4%, 0.11%, 3.3%, and 4.7% at the normoxemia, 90%, 80%, and 70% levels, respectively. SbpO2 and SaO2 values met precision criteria (1.6%, 95% CL = −4.9%, 1.3%) for normoxemia. 
SbpO2 and SpO2 values met precision criteria at the normoxemia (1.5%, 95% confidence limits [CL] = −4.4%, 1.5%) and 90% (1.9%, 95% CL = −3.6%, 3.8%) conditions. Precision exceeded the a priori criteria at the 5 other test conditions. SpO2 lagged 21 (SD, 11) seconds behind SbpO2. Conclusions: Buccal oximetry is an inaccurate and imprecise method of assessing oxygen saturation when saturation is less than 90%. Increasing divergence between SbpO2 and both SaO2 and SpO2 values was noted as hypoxemia worsened. The buccal method overestimated oxygen saturation in proportion to the degree of hypoxemia. Such overestimations may falsely lead nurses to conclude that a patient’s arterial oxygen saturation is acceptable when, in fact, further assessment or intervention is warranted.
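As a side note for readers unfamiliar with the method named above: Bland-Altman bias values are mean paired differences, and agreement is judged by limits around that bias. A minimal sketch of the calculation, using hypothetical paired saturation readings rather than study data:

```python
# Bland-Altman analysis: bias (mean difference between two measurement
# methods) and 95% limits of agreement, computed from paired readings.
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Return (bias, lower limit, upper limit) for paired readings."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired saturations (%): buccal vs arterial (illustrative only).
buccal = [95, 92, 88, 85]
arterial = [96, 93, 90, 88]
bias, lo, hi = bland_altman(buccal, arterial)
print(round(bias, 2))  # -1.75: the buccal method reads lower on average
```

A negative bias means the first method reads lower on average; the limits of agreement estimate the range within which 95% of paired differences fall.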

RES3 An Exploratory, Descriptive Pilot Study to Examine the Quality of Handoff Reports Using the ISBAR Technique

Wendy Savarese, Ruthann Zafian; Hartford Hospital, Hartford, CT

Purpose: (1) To determine if staff members who have been taught the ISBAR handoff technique are using the technique for handoff reports. (2) To examine the effectiveness of handoff reports when ISBAR is used and when it is not. Communication of information that is timely, accurate, complete, and directive between health care providers is essential to quality patient care. ISBAR is a handoff report technique that health care providers can use to improve the quality of their handoff reports. This institution implemented the technique in the fall of 2006. One year later, the task force that chose ISBAR needed to quantify and qualify the use of the technique among the staff. Background: As identified by the Joint Commission in Patient Safety Solutions, communication breakdowns were the leading cause of sentinel events in the United States between 1995 and 2006. Health care handoff best-practice techniques and processes have not yet been established; however, lessons can be taken from other industries. The use of a common communication technique such as SBAR, used initially in the military and the aviation industries, provides such a lesson. Although articles regarding the use of SBAR are common in current health care publications, research studies on its use and effects are rare. Methods: Clinical leaders from the department of nursing and leaders from other departments who had been trained in the ISBAR technique were reeducated on use of the technique, the data collection process for this study, and the use of the evaluation tool. They were then asked to witness 5 handoff reports in a 2-month period. Before the report was observed, verbal consent was obtained from the 2 parties. When the handoff report was complete, the observer interviewed both the provider and the recipient of the report. Completed data collection forms were returned to the principal investigator for statistical and qualitative analysis.
Results: Of 177 forms distributed, 108 (61%) were returned; 77 of the 108 (71%) documented use of ISBAR and 31 (29%) did not. Those who did not use ISBAR gave the following reasons: 23% preferred a different method (not specified), 23% did not want to use the method (thought “all that information” was unnecessary), 19% reported not being trained in the ISBAR technique, 19% said they forgot, 10% did not give a reason, and 6% said ISBAR is too time-consuming. A “meeting of the minds” was said to have happened when there was 50% or more agreement between the actions the report provider asked for and the actions the report receiver actually reported hearing. When ISBAR was not used, a meeting of the minds occurred only 23% of the time. When ISBAR was used, a meeting of the minds occurred 58% of the time. A 2 × 2 χ2 test of proportions showed this difference to be statistically significant (χ2 = 9.99, df = 1; P = .002). The chance of important information being heard and understood by the report receiver is significantly increased when the ISBAR technique is used. Conclusions: Use of the ISBAR technique improves the success of handoff communications in health care settings. Further education and/or remediation is needed, with an emphasis on providing recommendations.
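The reported χ2 statistic can be reproduced from the percentages above, assuming counts of 45/77 agreements with ISBAR and 7/31 without (the integers consistent with the reported 58% and 23%; these counts are inferred, not stated in the abstract). A continuity-corrected 2 × 2 χ2 test, computed from scratch:

```python
# Yates-corrected chi-square for a 2x2 contingency table [[a, b], [c, d]].
# Counts are inferred from the reported percentages (45/77 = 58%, 7/31 = 23%).

def yates_chi2(a, b, c, d):
    """Chi-square statistic with continuity correction for a 2x2 table."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((abs(o - e) - 0.5) ** 2 / e
               for o, e in zip(observed, expected))

# Rows: ISBAR used / not used; columns: "meeting of the minds" yes / no.
chi2 = yates_chi2(45, 32, 7, 24)
print(round(chi2, 2))  # 9.99, matching the reported statistic
```

That the corrected statistic lands exactly on 9.99 supports the inferred counts; the uncorrected statistic for the same table would be somewhat larger.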

RES4 Are Patients in the Medical-Surgical Intensive Care Unit Delirious? A Point Prevalence Study

Orla Marie Smith, Callum Kaye; St Michael’s Hospital, Toronto, Ontario

Purpose: To measure the prevalence of delirium in the intensive care unit (ICU) and to compare delirium prevalence among predefined subgroups. Clinician documentation of delirium assessment also was evaluated. Background: Delirium is an acute state of disturbed consciousness and fluctuating cognition. ICU patients are at increased risk for delirium because of their older age, acute systemic illness, organ dysfunction, electrolyte abnormalities, psychoactive medications, sleep deprivation, and preexisting illnesses. Incidence of delirium is estimated at upwards of 80% in ventilated patients, yet it is often not identified by ICU clinicians. Unrecognized and untreated delirium may result in increased morbidity and mortality. Methods: This prospective, observational study recruited 25 subjects from a 24-bed medical-surgical ICU (MSICU) in a 550-bed inner-city teaching hospital. All patients in the ICU during the study period without documented dementia or primary neurological dysfunction were enrolled unless they were deaf or unable to speak English. Enrolled patients were assessed at 3 time points (morning, afternoon, evening) on 3 different days to reflect the fluctuating nature of the condition of interest. Delirium screening was conducted by using the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU). The study was reviewed and approved by the institutional research ethics board. Results: Thirty-nine observations were recorded in 25 participants. Participants were 88% male, with a mean (SD) age of 65 (13.2) years and a mean (SD) score of 16 (7.7) on the Acute Physiology and Chronic Health Evaluation (APACHE) II. A total of 56% were admitted to the ICU postoperatively, and the median (interquartile range) ICU length of stay was 17 (48) days. Delirium was present in 23% of observations (n = 9), absent in 46% (n = 18), and not assessable in 31% (n = 12).
No significant differences were found between delirious and nondelirious patients related to age, length of stay, APACHE II score, or duration of mechanical ventilation. Delirium was identified by the clinical team in only 1 of 9 patients with a positive CAM-ICU result. Conclusions: Prevalence of delirium in the study sample is similar to that reported in the published literature despite a higher number of males and longer stay for patients in our study cohort. Delirium screening could not be conducted in a significant proportion of patients because of high levels of sedation. Delirium was not routinely identified by physicians and nurses. Delirium education should be provided in tandem with education on sedation and analgesia assessment and management.

RES5 Central Venous Oxygen Saturation Levels in Patients With Severe Sepsis and Septic Shock

Susan Pambianco, Elizabeth Bridges, Susan Woods; University of Washington Medical Center, Seattle, WA

Purpose: To determine how often intermittent levels of central venous oxygen saturation (Scvo2) should be measured to detect an alteration in tissue oxygenation when continuous fiber-optic Scvo2 monitoring is unavailable. Background/Significance: Tissue oxygenation plays a key role in severe sepsis. Researchers have discovered that global tissue oxygenation levels can guide treatment and may improve patients’ outcomes. It is important to establish the frequency for measuring intermittent Scvo2 levels in patients with severe sepsis to assist clinicians in using Scvo2 as a guide for therapeutic interventions when continuous Scvo2 monitoring is unavailable. Methods: Medical records of 5 patients with severe sepsis and septic shock who were included in an equipment evaluation at the University of Washington Medical Center were retrospectively reviewed. Heart rate and Scvo2 data were gathered at 5-minute intervals by using the Scvo2 Medical Record Review Tool, for a total of 1853 data points. A frequency distribution for the variables was used to characterize the total sample. Data analyses used descriptive statistics for the continuous variable of Scvo2 and included range, mean, and standard deviation. Results: The continuously monitored Scvo2 for all 5 subjects ranged from 61% (SD, 3%) to 84% (SD, 5%). The frequency with which Scvo2 values varied more than 5% from the patient’s mean Scvo2 was between 29% and 62%. A clinically significant change in heart rate preceded a clinically significant change in Scvo2 for only 1 subject (r = 0.74). If Scvo2 were measured every hour, a clinically significant change in Scvo2 (ie, Scvo2 that varied more than 5% from the patient’s previously obtained value) would have been missed in 10% to 32% of cases. Additionally, there was a trend of increasing missed points (up to 62%) as the time between measurements increased.
Conclusion: The results of this study suggest that continuous monitoring is required to reliably detect clinically significant changes in Scvo2. If intermittent measures are the only option, they should be measured hourly. Other factors to consider when deciding on intermittent versus continuous measurements include the risk of iatrogenic infection, the risk of anemia, and the cost of supplies versus oximetric catheters.

RES6 Characteristics of Patients Readmitted to the Coronary Intensive Care Unit

Deborah G. Klein; Cleveland Clinic, Cleveland, OH

Purpose: To describe the population of patients readmitted to a 16-bed coronary intensive care unit (CICU) in a large Midwestern medical center in a 1-year period. By determining reasons for readmission and characteristics of the patients readmitted to the CICU, strategies can be developed and implemented that may decrease readmissions. Background: Research indicates that patients readmitted to an ICU have mortality rates up to 6 times higher than those not readmitted and are 11 times more likely to die in the hospital. Readmission rates vary considerably, from 0.89% to 19%, with a mean of 7.78%. Readmissions have been retrospectively examined in numerous studies; however, no clear indication of why ICU readmissions occur or what their common characteristics are has been found. Methods: Clinical nurse specialists (CNSs) from the CICU and progressive care units at a large teaching hospital tracked readmissions to the CICU from the progressive care units for 1 year. Data collected from retrospective chart reviews included sex, age, length of stay in the CICU before transfer, reason(s) for readmission, and medical diagnoses. Patient assessments, intervention(s), and response(s) to intervention(s) leading up to the readmission were also reviewed. Results: In a 1-year period, 31 patients (18 males, 13 females; mean age, 69 years) had 33 readmissions to the CICU. The readmission rate was 3%. The mean length of stay in the CICU before transfer was 6.6 days. A total of 97% had multiple comorbid conditions, including diabetes, hypertension, heart failure, acute myocardial infarction, coronary artery disease, renal failure, aortic stenosis, mitral valve regurgitation, aortic aneurysm, peripheral vascular disease, endocarditis, history of a stroke, and/or arrhythmias. The most common reason for readmission was respiratory distress, followed by arrhythmias and chest pain. Conclusions: The population readmitted to the CICU was older, sicker, and at risk for respiratory distress.
The readmission rate of 3% was below the published average of 7.78%. Although it is unrealistic to expect no readmissions, strategies have been implemented that may reduce readmissions. Some of these include CNS to CNS consultation on complex patients before transfer, expanded handoff communication between nurses, and closer monitoring of patients’ respiratory status in progressive care units.

RES7 Chest Radiographs After Chest Tube Removal in Children

Cathy S. Woodward, Donna Dowling, Gordon L. Gillespie, II; University of Texas Health Science Center, San Antonio, TX

Purpose: Chest tubes are placed at the time of surgery in children undergoing repair or palliation of congenital heart disease. Upon removal of the chest tube, a routine chest radiograph is obtained to assess for the development of a pneumothorax, an infrequent but potentially serious complication of chest tube removal. The purpose of this study was to determine if signs and symptoms of pneumothorax are sufficient predictors of the need for chest tube reinsertion in children after chest tube removal after surgery. Background: The practice of obtaining a routine chest radiograph after chest tube removal is not supported by research. Chest radiographs are a sensitive and specific way to determine the presence of a pneumothorax but unnecessary chest radiographs (1) expose children to low levels of ionizing radiation, (2) are costly, and (3) use unnecessary radiology and nursing resources. Adult studies have shown clinical evaluation to be a safe and effective method of determining which patients require chest radiographs after chest tube removal. Methods: The study used a prospective, descriptive comparative design. The setting was a 200-bed children’s hospital in Texas. The sample included 60 children age 6 years or under who had undergone operative repair or palliation of congenital heart disease. A cardiac and respiratory assessment was conducted by a nurse practitioner principal investigator immediately before and 2 hours after chest tube removal. The nurse practitioner decided whether a chest radiograph was warranted and the routine radiograph was obtained and interpreted by a pediatric intensivist who did not know the nurse practitioner’s findings. Results: No subjects had clinical signs or symptoms of respiratory distress after removal of their chest tube and no chest radiographs were recommended on the basis of the nurse practitioner’s assessment. 
One subject had a small, clinically insignificant pneumothorax develop after chest tube removal that did not require any intervention. No significant pneumothoraces occurred. The risk of a significant pneumothorax developing in this study was less than 5%. Conclusions: The absence of clinically significant pneumothoraces after chest tube removal in this sample may not represent other populations of children with chest tubes placed for other reasons. The type of chest tube placed and the method of removal used at the study hospital may have influenced the findings. Based on the exact binomial distribution with 60 subjects, none with clinically significant pneumothoraces, the risk of a pneumothorax developing after chest tube removal may be low enough to consider not obtaining a chest radiograph.
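The "less than 5%" risk bound quoted above follows directly from the exact binomial distribution for zero events: the one-sided 95% upper confidence limit is the risk p at which observing no pneumothoraces in 60 subjects would still occur with 5% probability, that is, the solution of (1 − p)^60 = 0.05. A short check:

```python
# One-sided 95% upper confidence limit for an event risk when
# 0 events are observed in n subjects: solve (1 - p)**n = alpha.

def upper_limit_zero_events(n, alpha=0.05):
    """Exact binomial upper bound on the true risk given 0/n events."""
    return 1 - alpha ** (1 / n)

bound = upper_limit_zero_events(60)
print(round(bound, 4))  # 0.0487, i.e., just under the 5% quoted in the text
```

This is the exact form of the informal "rule of three" (3/n = 3/60 = 5%), and it confirms that 0 events in 60 subjects caps the plausible pneumothorax risk just below 5%.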

RES8 Decreasing Sepsis Mortality Rates With an Early Detection Computerized Auto-Alert System

Darlene D. Baker, Casmen Oglesby; Arkansas State University, Jonesboro, AR

Purpose: To assess the effectiveness of an early detection computerized auto-alert monitoring system compared with a noncomputerized detection system in adult hospitalized patients. The study used the physiological parameters advocated for early sepsis screening by the Surviving Sepsis Campaign (SSC), which research has shown decreases sepsis mortality rates. Background: Sepsis affects approximately 750 000 people yearly, with 200 000 of those patients dying, costing the United States health care system more than $16.7 billion per year. Sepsis is the leading cause of death in noncoronary critical care units and is the 10th leading cause of death in the United States. Early detection has positive outcomes for patients because of early intervention, which can prevent hypoperfusion and organ dysfunction. Methods: This quasi-experimental study used a convenience sample of adult, nonpregnant participants 18 years or older hospitalized within a 248-bed comprehensive metropolitan hospital in the mid-South. The participants’ selection criteria were based on the physiological parameters currently recommended by the SSC. A system was developed by the quality improvement coordinator and the information technology department to send an auto-alert via pager and e-mail when at least 2 of the criteria for suspicion of infection and at least 1 organ dysfunction criterion (per SSC parameters) were charted within the computerized documentation system by hospital personnel. The page was sent to the medical response team (MRT) and the patient care coordinator (PCC) of the medical-surgical units. In addition, the information was e-mailed to the quality assurance coordinator and the chief medical officer. Results: An independent t test was used to evaluate the data for this study. Thirty participants in the computerized auto-alert detection system study were compared with 28 participants in a previous noncomputerized detection system study.
The sepsis mortality was 36.7% (11 of 30) for the computerized auto-alert detection system study compared with 50% (14 of 28) for the noncomputerized detection system study (t = .031, P = .05). A Levene test (Sig = .82, P > .05) supported the assumption of homogeneity of variance for the 2 groups. Conclusions: Based on the data analysis, the sepsis mortality rate is significantly lower in adult hospitalized patients screened by using an early detection computerized auto-alert monitoring system rather than a noncomputerized detection system that uses the physiological parameters advocated for early sepsis screening by the SSC.
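As an arithmetic aside, the two mortality proportions above can be recomputed directly from the reported group counts. The stdlib-only sketch below also forms a pooled two-proportion z statistic as an illustrative alternative to the t test the abstract reports; it is not a reproduction of the study’s analysis.

```python
import math

# Group counts as reported in the abstract
deaths_alert, n_alert = 11, 30    # computerized auto-alert group
deaths_manual, n_manual = 14, 28  # noncomputerized detection group

p1 = deaths_alert / n_alert       # reported as 36.7%
p2 = deaths_manual / n_manual     # reported as 50%

# Pooled two-proportion z statistic (illustrative alternative to a t test)
p_pool = (deaths_alert + deaths_manual) / (n_alert + n_manual)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_alert + 1 / n_manual))
z = (p1 - p2) / se

print(f"{p1:.1%} vs {p2:.1%}")  # → 36.7% vs 50.0%
print(round(z, 2))
```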

RES9 Delirium Assessment in Surgical Intensive Care Units

Diane P. Pagano; Flagler Hospital, St Augustine, FL

Purpose: Assessing delirium in intensive care unit (ICU) patients has helped to identify patients with higher mortality rates, longer length of stay, and added costs. Would the same results be seen in a surgical intensive care unit where patients have a 2.6-day length of stay? The Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) tool was used to assess patients to see if similar results would be obtained in a short-stay unit. Background: Delirium in the ICU has been associated with an increased length of stay from 5 to 8 days, and hospital length of stay is increased from 11 to 21 days. Costs associated with the additional time in the hospital increase from $13 000 to $22 000, and mortality is 3 times higher among patients with delirium. The CAM-ICU tool was selected for its ease of use and its ability to monitor nonverbal, ventilator-dependent patients. Methods: Data collection continued for 90 days in a 12-bed surgical ICU. Nurses assessed patients every 12 hours. In patients with positive results, nursing interventions were implemented and physicians were approached about the need for a psychiatry consultation (see tool and intervention). Patients were randomly selected to confirm the validity of the data collection. Daily census reports were compared with the number of delirium assessments done each day, and ongoing one-on-one updates assisted the staff when their results were in question or when an assessment had been missed. Results: Of 304 patients studied, 43 were positive for delirium. Twelve of the 43 patients died during the 90-day period, a mortality rate of 27.9%. Patients who were not delirious had a mortality rate of 7.6%. Length of stay in the ICU was 5.42 days for the patients with delirium as opposed to 2.12 days for the patients without delirium. 
All but 1 of the patients identified had been prescribed several drugs that alter the central nervous system: 19 received hydromorphone, 10 haloperidol, 16 midazolam, 28 lorazepam, 15 morphine for pain control, and 21 propofol. Conclusions: Although most nurses know from experience that confused patients require more care and have a longer length of stay, the CAM-ICU tool provides a means to measure and identify these patients beyond disorientation to time and place. If we can identify delirious patients in a short-stay ICU, we can identify those with a higher mortality rate and longer length of stay.

RES10 Describing Patterns of Use of Saline Locks in Patients Admitted to a Telemetry Unit: A Retrospective Study

Steven George Szablewski, Linda Thomas, Patti Zuzelo, Ellen Morales; Albert Einstein Healthcare Network, Philadelphia, PA

Purpose: This study was conducted to examine patterns of use of saline locks (SLs) in patients admitted with high-frequency medical diagnoses to a telemetry unit in order to determine the appropriateness of routine SL insertions and the acceptability of an evidence-based decision tree stratifying SL insertion decisions on the basis of medical diagnosis and vein condition, guided by professional expertise and patient preference. Background: Traditionally, SLs have been routinely inserted and maintained in patients admitted to telemetry as a precaution in the event of dysrhythmias requiring emergent intravenous drug interventions. SLs are associated with risk because of their potential for infection and phlebitis as well as patient discomfort and injury. Although risks associated with SLs have been examined, research on rates of SL use remains very limited, and no standard of care regarding mandatory SL requirements has been published. Methods: In this quantitative, retrospective descriptive study, we reviewed medical records of 341 patients ages 21 to 101 years (mean, 65.69; SD, 15.55) admitted to the telemetry unit in a 3-month period with one of the most common admitting diagnoses: acute myocardial infarction, heart failure, syncope/dizziness, and chest pain. Patient demographics, SL identifiers, and administered intravenous medications were transcribed by registered nurses into an electronic database. Classification of intravenous medications as either urgent or nonurgent was based on the American Heart Association 2005 Guidelines for Emergency Cardiovascular Care. Descriptive and inferential analyses were performed by using SPSS 15.0. Results: Although more than one-third of SLs were not used and more than half were used nonurgently, 12.9% were used for the urgent delivery of medications. SLs were used most frequently for urgent delivery (22.9%) in patients with an admitting diagnosis of heart failure. 
One-way analysis of variance revealed significant differences by admitting diagnosis in the inpatient day of first use of the SL (F = 43.22; P < .001). SLs were used earliest in the hospital stays of patients admitted with heart failure, with a mean first day of SL use of 1.71 (SD, 1.94). Chi-square analysis of the frequencies of urgent, nonurgent, or nonuse of SLs revealed a greater than expected rate of nonuse and nonurgent use (χ2 = 76.41; df = 2; P < .001). Conclusions: The traditional practice of continuous SL access for patients requiring telemetry requires further scrutiny. Findings suggest that only a select number of patients require SLs for urgent delivery of medications. Implementation and evaluation of an evidence-based algorithm may be useful in determining the need for SL placement in certain telemetry patients. Additional research is essential to determine whether the benefits outweigh the risks associated with the insertion and maintenance of SLs.
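The chi-square goodness-of-fit computation reported above (χ2 = 76.41; df = 2) can be sketched as follows. The cell counts here are hypothetical, back-calculated from the reported percentages of urgent, nonurgent, and nonuse among the 341 SLs, so the resulting statistic only approximates the published value.

```python
# Hypothetical cell counts (not the study's actual data): roughly 12.9%
# urgent, just over half nonurgent, just over a third unused of 341 SLs.
observed = {"urgent": 44, "nonurgent": 177, "nonuse": 120}

n = sum(observed.values())
expected = n / len(observed)  # equal frequencies assumed under the null

chi2 = sum((o - expected) ** 2 / expected for o in observed.values())
df = len(observed) - 1

print(f"chi2 = {chi2:.1f}, df = {df}")  # → chi2 = 78.3, df = 2
```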

RES11 Diffusion of Innovation for Documenting Asthma Severity Scores at Triage in a Pediatric Emergency Department

Lisa Jackson, Kristi Frese, Gordon Gillespie, Diane Morris, Sharon Cooper; Cincinnati Children’s Hospital Medical Center, Cincinnati, OH

Purpose: It is important that asthma severity scores be documented during the emergency department (ED) triage assessment of children with signs and symptoms of asthma such as wheezing, difficulty breathing, or a persistent cough. The purpose of this study was to test whether a social marketing campaign based on the Diffusion of Innovation theory would increase triage nurses’ compliance with documenting asthma severity scores. Background: In the main ED treatment rooms, patients were prioritized on the basis of their triage acuity category. If multiple patients arrived with the same acuity category, patients were seen according to their ED time of arrival. It was found that patients within the same triage acuity category could be clinically quite different. Therefore, it was critical that asthma severity scores be documented at triage so that emergency nurses could better prioritize the order in which patients in treatment rooms were seen. Methods: A prospective chart review was completed on all patients aged 2 to 17 years discharged with an asthma diagnosis. Triage documentation was audited to identify the triage nurse and documentation of an asthma severity score. Data were analyzed at baseline and biweekly until 2 weeks after the end of the social marketing campaign (27 weeks total). The social marketing campaign included a short PowerPoint presentation e-mailed to triage nurses, colored informational flyers posted in staff restrooms and the breakroom, campaign logo placards placed on all 6 triage computer workstations, and logo ink pens distributed periodically to compliant triage nurses. Descriptive statistics were computed. Results: ED charts meeting compliance increased from 11.1% (n = 5) to 51.4% (n = 55) at the end of the study. The percentage of compliant nurses increased from 6.3% (n = 2) to 50% (n = 25) at the end of the study. 
The initial increase in compliance was associated with the placement of placards on triage computers and the e-mailed PowerPoint presentation. After the initial month, minimal increases were seen after new placards or flyers were placed. The only marketing tool that consistently increased compliance with triage nurses’ documentation of asthma severity scores was the in-person delivery of colorful ink pens bearing the logo followed by the words “Get WARM-E in triage!” Conclusions: The desired documentation change was achieved in this study using a social marketing campaign. Future campaigns aimed at changing practice would benefit from focusing on initial education about expected standards followed by periodic delivery of rewards such as ink pens or other items valued by triage nurses. However, if 100% compliance is essential because of patient safety or other standards, it may be necessary to include the practice change as a component of annual performance reviews.

RES12 Early, Single Chlorhexidine Application to Reduce Oral Flora and Ventilator-Associated Pneumonia in Trauma Victims

Mary Jo Grap, Cindy Munro, R.K. Elswick, Stephanie Higgins, Curtis Sessler, Kevin Ward; Virginia Commonwealth University, Richmond, VA

Purpose: Ventilator-associated pneumonia (VAP) occurs most often in trauma, burn, and surgical patients. Reduction of oral bacteria associated with VAP reduces the pool of organisms available for translocation to the lung. VAP reduction occurs with repeated chlorhexidine (CHX) dosing, but use of a single dose has not been studied. This randomized controlled clinical trial tested the effect of an early (within 12 hours of intubation) application of CHX by swab versus control (no swab) on oral microbial flora and VAP. Background: VAP is responsible for 90% of nosocomial infections in the mechanically ventilated population. Growth of potentially pathogenic bacteria in dental plaque provides a nidus of infection for microorganisms that have been shown to be responsible for the development of VAP. Organisms associated with VAP colonize the oral pharynx of the critically ill patient before the VAP diagnosis. Reduction of organisms in the oral cavity by using CHX immediately after intubation may reduce the incidence of VAP. Methods: A total of 145 trauma patients requiring endotracheal intubation were randomly assigned to either the intervention (5 mL CHX) or the control group. Oral microbial flora from semiquantitative oral cultures and VAP (clinical pulmonary infection score [CPIS]) were assessed at study admission and at 24 (oral culture data only), 48, and 72 hours after intubation. Trauma and injury severity score (TRISS), illness severity (Acute Physiology and Chronic Health Evaluation [APACHE] III), and frequency of usual oral care were also recorded. A repeated-measures proportional odds model tested for differences in the oral cultures between the groups, and a repeated-measures random effects model tested for differences in CPIS, with CPIS greater than 6 indicating VAP. Results: Of the 145 randomized patients, 71 and 74 were randomized to intervention and control, respectively. Patients were 70% male and 60% white. 
Mean age was 42.4 years (SD, 18.2); mean APACHE III score was 66 (SD, 29.8). There were no significant differences between groups at study admission for any clinical characteristic except higher CPIS scores (5.05 [SD, 0.28] for intervention vs 3.98 [SD, 0.27] for control; P < .01) and greater levels of positive oral cultures (18.3% intervention vs 5.6% control; P < .01). No significant treatment effect (P = .33) on oral cultures was found. However, a significant treatment effect (P = .02) on CPIS was found both from admission to 48 hours and from admission to 72 hours. Conclusions: By 48 or 72 hours, VAP had developed in 41.7% of control patients with a baseline CPIS less than 6 (no VAP) vs only 19.4% of intervention patients; thus, a single dose of CHX early in the intubation period is effective in reducing early VAP, especially in trauma patients without pneumonia at intubation. Very few positive oral cultures were observed in this study, making it difficult to detect changes in this outcome across time or between the groups.

RES13 Earplugs Improve the Subjective Experience of Sleep for Critical Care Patients

Carrie J Scotto, Carol McClusky; Summa Health Systems, Cuyahoga Falls, OH

Purpose: The goal of this study was to explore the effects of earplug use on the subjective experience of sleep for patients in critical care. Background: The negative effects of noise in critical care include sleep disturbances, increased stress response, and reduced patient satisfaction. The nature of critical care often precludes quiet time protocols. Previous studies indicate that the use of earplugs can improve rapid-eye-movement sleep and sleep efficiency. This study examined the effects of earplugs as a noninvasive method for improving the subjective sleep experience and increasing patients’ comfort and satisfaction. Methods: This quasi-experimental study recruited nonventilated, nonsedated adults admitted to critical care and randomly assigned them to the intervention or control group. The intervention group used earplugs during nighttime sleep hours, with short-term removal allowed during patient care. Before 12 noon the next day, all subjects completed the Verran-Snyder-Halpern Sleep Scale, an 8-question visual analog scale, to describe their subjective response to sleep. Differences between the intervention and control group scores were assessed via t tests. Results: Eighty-eight participants (49 intervention, 39 control) completed the study. Mean age was 63 years; 56% were male (n = 49) and 93% were white (n = 81). Total sleep satisfaction scores were significantly better for the intervention group (P = .002). Seven of the subjective categories were independently significant (P = .005–.04). One category, satisfaction with the amount of time needed to fall asleep, was not significant (P = .11). Conclusions: Earplug use improved the subjective experience of sleep for this group of critical care patients without interfering with care delivery. The intervention group did not use sedative or hypnotic medications and still reported greater satisfaction with sleep. 
This low-cost, noninvasive method should be made available to patients to increase satisfaction with their sleep experience.
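The t tests described above compare mean scale scores between two independent groups. A minimal stdlib sketch of the Welch t statistic is shown below; the scores are invented for illustration and are not the study’s data.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    return (mean(a) - mean(b)) / (variance(a) / len(a) + variance(b) / len(b)) ** 0.5

# Hypothetical Verran-Snyder-Halpern-style totals (higher = better sleep)
earplugs = [72, 80, 65, 90, 77, 84]
control = [55, 62, 70, 48, 66, 58]

print(round(welch_t(earplugs, control), 2))  # → 3.75
```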

RES14 Effective and Efficient Glucose Management in the Intensive Care Unit

Jennifer A. Myers, Poome Chamnankit, Jacquie Steuer; Evanston Northwestern Hospital, Evanston, IL

Purpose: We hypothesized that comparable glycemic control could be achieved with a protocol that measured serum glucose every 2 hours instead of hourly. Background: Aggressive management of hyperglycemia in the intensive care unit (ICU) decreases mortality in selected patients. In 2004, we adopted the Yale Protocol. Although a retrospective analysis of 24 108 serum glucose values in 1497 patients at our institution demonstrated effective control of hyperglycemia, the protocol was labor intensive for nurses because it required hourly monitoring with a complex calculation of insulin dosing. Methods: We conducted a prospective pilot study involving 14 patients and 316 glucose measurements. Primary end points included (1) time to target glucose of 140 mg/dL, (2) number of glucose measures relapsing above 140 mg/dL once target was achieved, and (3) the incidence of hypoglycemia (<60 mg/dL). These end points were compared with the aforementioned historical control group. A secondary end point was nursing compliance with glucose measurements every 2 hours. Results: A target glucose level of 140 mg/dL was achieved in a mean of 9.5 (95% CI, ±5.5) hours by using the new protocol compared with 6.0 hours in the historical control group. Once target glucose was achieved, 43% of subjects in the pilot study had 3 subsequent values that exceeded 140 mg/dL compared with 50% of historical controls. Hypoglycemia occurred in 1.27% compared with 1.0%. Compliance with the every-2-hour measurements was 94.2%. Conclusions: Comparable glucose control can be achieved in the ICU by using a protocol with a 2-hour rather than a 1-hour measurement interval. This protocol was less labor intensive, and nursing compliance was high.

RES15 Effective Pain Management Promotes Positive Patient Outcomes

Rachael Rivera, Andrea Nolan, Sarah Eccleston; Brooke Army Medical Center, Fort Sam Houston, TX

Purpose: The purpose of this process improvement (PI) initiative was to evaluate postintervention assessment of patients’ pain. Ineffective pain management is a common cause of detrimental patient outcomes, including higher morbidity and mortality rates. Poor pain assessment after a nursing intervention may be the primary reason for incomplete pain management and unsatisfactory outcomes. Background: In January 2001, The Joint Commission (TJC) implemented a new pain standard: pain must be assessed in all patients. To ensure compliance, in 2002 Brooke Army Medical Center began a PI initiative to evaluate current pain management practices throughout the hospital. After the TJC survey in 2006, the PI committee discovered a need to improve rates of pain reassessment 1 hour after medical intervention. In October 2007, weekly chart audits were started to evaluate nursing compliance. Methods: A retrospective chart review was done to compare nursing pain reassessment 1 hour after intervention before and after education. The setting was 10 inpatient nursing units in a 224-bed, level I trauma center. The sample involved 2858 inpatient charts. Weekly, all units randomly audited 10 nurses on pain reassessment after intervention, with an average of 85 different nurses per week. Data were collected for 42 weeks by using a standardized reassessment audit tool. The first 2 weeks of data were considered preeducation; the remaining 40 weeks were posteducation. The statistics were recorded and graphically trended to evaluate overall compliance against the benchmark of 90%. Results: Overall, after 40 weeks of posteducation data collection, nursing compliance with postintervention assessment improved 26%. These data revealed a new hospital average of 89.2%, which still falls short of Brooke Army Medical Center’s 90% benchmark but demonstrates an impressive hospitalwide improvement. In the first 2 weeks, before education, nursing compliance was 56%. 
Unit-level statistics were trended as well, with a maximum improvement of 55% and a minimum improvement of 8%. Conclusions: This study shows that with weekly chart audits and nursing education, postintervention pain assessment improved remarkably. Weekly chart audits, PI committee data distribution, and continuous staff education contributed to improved compliance. All units improved their efficacy in evaluating patients’ pain by successfully reassessing pain 1 hour after intervention. Brooke Army Medical Center is committed to the assessment, prevention, and treatment of pain.

RES16 Effectiveness of Bundle Implementation and Staff Education in Reducing Ventilator-Associated Pneumonia in the Pediatric Intensive Care Unit

Heidi D. Geary; Childrens Hospital Los Angeles, Los Angeles, CA

Purpose: To implement an evidence-based ventilator bundle applicable to children and to educate patient care providers in the pediatric intensive care unit (PICU) regarding its use in the prevention of ventilator-associated pneumonia (VAP). Evidence suggests that the use of a bundle and an increase in knowledge and awareness about VAP among patient care providers will bring about a change in clinical practice and therefore decrease VAP rates in the PICU. Background: In 2002, the Centers for Disease Control and Prevention reported that the average rate of VAP per 1000 ventilator days ranged from 2.2 to 14.7. Reports have shown that VAP can increase ICU and hospital length of stay, mortality, and total cost of care. In 2004, the 100 000 Lives Campaign developed recommendations for improving patient safety that address VAP. Since then, the effectiveness of these recommendations has been well documented in the adult population, but their significance for the pediatric population needs further study. Methods: Observational audits, which evaluated staff adherence to current practice standards for the care of ventilator patients, were completed. A 45-minute VAP education program directed toward all registered nurses and respiratory staff in the PICU was implemented. This included the introduction of an evidence-based VAP bundle consisting of 5 interventions. The program was preceded by a 10-question test to assess baseline knowledge of VAP. The test was repeated after the education to assess knowledge gained. Observational audits were then repeated to assess adherence to the bundle. VAP rates were collected throughout the study, and appropriate statistical analysis was applied. Results: The education was completed by 100% of nurses and 62% of respiratory staff in the PICU. The change in test scores before and after the education showed an increase in staff knowledge of VAP. 
The mean scores on the pretest and posttest differed by 3.6 points (P < .001); the score was the number of correct answers out of 10 questions. When observational audits before and after the education were compared, 2 of the 5 bundle interventions showed a significant increase in compliance by staff (P < .001 and P < .002). VAP rate data have shown a downward trend since the initiation of this study and continue to be collected in order to substantiate these findings. Conclusions: Implementation of an evidence-based ventilator bundle applicable to children, together with education of patient care providers in the PICU regarding its use, was successful in decreasing the rate of VAP in the PICU.
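The pretest/posttest gain reported above is the kind of paired comparison sketched below. The quiz scores are invented for illustration (the study reported a mean difference of 3.6 points) and are not the study’s data.

```python
from statistics import mean, stdev

# Hypothetical 10-point quiz scores for 8 staff members, before and after
pre = [4, 6, 3, 5, 7, 2, 5, 6]
post = [8, 9, 8, 7, 8, 8, 9, 9]

gains = [b - a for a, b in zip(pre, post)]
t = mean(gains) / (stdev(gains) / len(gains) ** 0.5)  # paired t statistic

print(round(mean(gains), 1), round(t, 2))  # → 3.5 6.17
```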

RES17 Effects of a Rapid Response Team on Antecedents to Patient Deterioration: A Descriptive Study

Mary Jo Kelly; University of Washington Medical Center, Seattle, WA

Purpose: To examine the relationships between clinical triggers and rapid response calls in patients on acute care units, describe the prevalence and most common clinical triggers (missed and detected), and compare mortality rates before and after implementation of a rapid response team (RRT). Background: Adverse events in hospitalized patients are responsible for preventable deaths. In-hospital cardiac arrest survival has remained stagnant at 17% despite the evolution of code teams. Patients present with physiological abnormalities (clinical triggers) several hours before clinical deterioration, and mortality rates are higher when patients do not receive medical intervention before deterioration. RRTs were developed as an early intervention to improve patients’ outcomes and reduce mortality rates by providing highly skilled assistance to acute care patients with signs of physiological deterioration. At an academic medical center in the Pacific Northwest, an RRT was available to respond to a patient’s bedside 24 hours a day; the team evolved from a single stat nurse into a full RRT 4 years ago. Methods: A descriptive study using a retrospective chart review was conducted at a 396-bed academic hospital in the Pacific Northwest. A random sample of 84 patients admitted to acute care units who had experienced a negative outcome from January 1, 2007, to December 31, 2007, was studied. Mean age was 60.8 years; the sample included 40 females and 44 males, and 65% were white. Four databases were queried: death records, transfers to a higher level of care, Code 199 records, and the RRT Catalyst Tool. An existing rapid response data collection tool was used. The data were presented and analyzed by using descriptive statistics and measures of frequency distribution, central tendency, and relationships. 
Results: The most common triggers for activating the RRT at this institution were acute mental status change, oxygen saturation <90%, respiratory rate >28/min or <8/min, and systolic blood pressure <90 mm Hg or a 20% decrease from baseline. Assistance was summoned 87% of the time, resulting in a 13% rate of missed triggers. Patients presented with a mean of 2.5 triggers (SD, 1.4) when the stat nurse was the mode of activation and a mean of 3.1 triggers (SD, 0.96) when the RRT was the mode of activation. The time from onset of a clinical trigger to patient disposition was 127 minutes (SD, 166 min) when the stat nurse was called and 107 minutes (SD, 74 min) when the RRT was activated; the difference was not significant. No significant change was found in mortality. Conclusions: Education may increase recognition of clinical triggers, and this may reduce the 13% rate of missed triggers. Evaluation of causes of the delay from onset of trigger to disposition of the patient is needed to reduce the risk of harm to patients in acute care units. Investigation is also recommended to evaluate whether the tachycardia trigger should be adjusted to a heart rate >20% above baseline to allow earlier recognition of patient deterioration. Implementing the stat nurse program most likely reduced mortality.

RES18 Endotracheal Tube Cuff Pressure: Changes Associated With Activity and Over Time

Mary Lou Sole, Elizabeth D. Penoyer, Xiaogang Su, Samar Kalita, Melody Bennett, Jeffery Ludy, Steven Talbert, Edgar Jimenez, Scott Mercado; University of Central Florida College of Nursing, Orlando, FL

Purpose: Critically ill patients often require an endotracheal tube (ETT) to facilitate airway management and mechanical ventilation. Little is known about the variability of ETT cuff pressure (Pcuff) in response to patient care and activities and in the period between intermittent measurements (usually every 8 to 12 hours). The purposes of this study were to (1) assess changes in Pcuff associated with routine care and patient activities and (2) describe the natural history of ETT Pcuff over time. Background: Pcuff must be maintained within a narrow therapeutic range and is often measured and adjusted once per shift. Little is known about Pcuff in the interval between intermittent measurements. Factors that decrease Pcuff (activity or time) increase the risk for aspiration and ventilator-associated pneumonia; if Pcuff is too high, the risk for tracheal damage increases. This study adds to the body of knowledge related to Pcuff and will assist in designing interventions to improve patient outcomes. Methods: This research is an analysis of data from a crossover study that tested an intervention to maintain Pcuff; Pcuff was monitored continuously during a 12-hour shift for 2 consecutive days via a transducer and pressure monitor. Critically ill subjects who were orally intubated (n = 32) were enrolled; changes over time were assessed in 29 subjects who had data collected during the control day. Pcuff was adjusted to a minimum of 22 cm H2O at the beginning of the shift, and monitoring was begun. Patient activities (eg, turning, coughing, suctioning) were recorded by a research assistant on a tablet computer using software (Spectator GO!, Biobserve, Bonn, Germany) that captures real-time data. Results: Subjects were a mean (SD) of 62 (20) years old, mostly male (n = 25), and intubated for a mean (SD) of 4 (3) days. 
To compare the variation in Pcuff during activity, a data set of repeated measures of variations during activities was compiled; data were compared by using a generalized estimating equation method. Significant increases in variation of Pcuff (P < .05) were noted with ETT suctioning, turning, movement, coughing, and procedures. Significant decreases were found after medication for pain or agitation. As a natural history model for Pcuff, the pattern of change over time (control day) was assessed. A linear model was fit by regressing Pcuff on time for each patient; values decreased significantly over time (P < .001). Conclusions: Pcuff was dynamic and changed frequently. Many activities increased Pcuff, and values quickly returned to baseline. Pcuff decreased after administration of medications, which may be related to a higher sedation level or physiological effects on the airway; clinical signs such as a low exhaled volume alarm or an audible leak were seldom seen. Because Pcuff decreased over time, research is needed to determine the optimal pressure at which to adjust Pcuff and the frequency of assessment needed to prevent complications.
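The per-patient linear fit described above (regressing Pcuff on time) amounts to an ordinary least-squares slope. The hourly readings below are invented to illustrate the computation and are not study data.

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

hours = [0, 2, 4, 6, 8, 10, 12]
pcuff = [22.0, 21.6, 21.1, 20.9, 20.4, 20.1, 19.8]  # cm H2O, hypothetical

print(round(ols_slope(hours, pcuff), 3))  # → -0.184 (cm H2O per hour)
```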

RES19 Evaluation of a Standardized Order Set for Planned Withdrawal of Life Support in Critical Care

Fiona A. Winterbottom, Deborah Bourgeois, David Taylor, Hiliary Luminais, Tracy Gregory; Ochsner Medical Center, New Orleans, LA

Purpose: To evaluate the impact of a standardized order set for withdrawal of life support on critical care nurses’ perceptions of end-of-life care. Background: Twenty percent of deaths in America occur in intensive care units (ICUs) each year, and many are associated with withdrawal of life support. Nurses are often frustrated and emotionally drained by the end-of-life experience, and families are dissatisfied with the dying process. Limited studies report that standardized orders are associated with increased staff and family satisfaction. The use of standardized order sets may be effective in improving nurses’ satisfaction in orchestrating planned withdrawal of life support. Methods: Pretest-posttest design with a convenience sample of registered nurses working in 2 units. Nurses voluntarily completed a 14-item Likert scale survey using SurveyMonkey before implementation of the order set. The order set was in place for 90 days, and the postsurvey opened on the 91st day after the start of the order set. Only those nurses who used the order set were eligible to participate. Reliability and factor analyses will be calculated for the survey. Nonparametric statistics will be calculated for each variable, with chi-square tests to test for group differences. All statistical analyses are 2-tailed, α = .05. Results: Pretest sample: 70 of 100 (70%) eligible nurses completed the survey; 35/70 (50%) strongly agreed that order sets would improve end-of-life care during withdrawal; 32/70 (45%) strongly agreed that they were comfortable with the withdrawal process; and 5/70 (7.1%) strongly agreed that current orders for withdrawal were clear. Once the order set has been in place for 90 days, posttest data collection will begin. Conclusions: Conclusions will be based on the final data analysis once all the data have been collected.

RES20 Examination of the Diaphragm Following the Infusion of Different Fluid Resuscitation Therapies After Hemorrhagic Shock

Janet D. Pierce, Amanda Knight, J. Thomas Pierce, Richard Clancy, Joyce Slusser; University of Kansas, Kansas City, KS

Purpose: The first goal of this study was to compare the effects of different fluid resuscitation therapies on diaphragm shortening (DS), blood flow, hydrogen peroxide concentration, and apoptosis after hemorrhagic shock (HS). Our second goal was to evaluate the effects of dopamine on diaphragm performance by measuring DS, blood flow, apoptosis, and hydrogen peroxide after HS. Background: HS is a leading cause of death among trauma patients. Proper management of fluid resuscitation is crucial to the treatment of patients who experience a hemorrhagic event. Intravenous fluids administered to HS patients may include lactated Ringer solution (LR), dopamine (DA), and Hespan (hetastarch). To our knowledge, no published studies have involved measurement of diaphragm performance during HS with or without fluid resuscitation. Methods: This randomized, controlled study used anesthetized male Sprague-Dawley rats. We measured arterial blood pressures, blood flow, respiratory rate, and DS. The change in diaphragm thickness during inspiration was used as an index of DS. Approximately 30% of blood volume was rapidly removed to elicit HS. After 60 min of HS, LR, LR + DA, Hespan, or Hespan + DA was infused for 30 min. Half of the diaphragm was excised to measure apoptosis by using fluorescent microscopy; the other half was used to assess hydrogen peroxide concentration by using laser scanning cytometry. Blood flow was assessed by using fluorescent microspheres. Data were analyzed by using means (SDs) and analysis of variance with a mixed model. Results: At baseline, DS was 0.61 mm and increased significantly to 0.86 mm after 60 min of HS. After LR was infused, DS decreased to 0.60 mm, whereas with LR + DA, DS was maintained at 0.92 mm. DS was 0.89 mm during Hespan administration and 0.97 mm during Hespan + DA administration. Before HS, diaphragm blood flow was 0.75 mL/min per gram and decreased to 0.57 mL/min per gram after HS. 
Diaphragm blood flow was 0.46 for LR, 0.79 for LR + DA, 0.95 for Hespan, and 1.2 for Hespan + DA. Hydrogen peroxide concentration, measured as mean fluorescence intensity, was 8.4 × 10⁶ for LR, 2.0 × 10⁶ for LR + DA, 5.4 × 10⁶ for Hespan, and 1.7 × 10⁶ for Hespan + DA. Percentage of diaphragm apoptosis was 34.5 for LR, 1.2 for LR + DA, 11.7 for Hespan, and 6.8 for Hespan + DA. Conclusions: Administering LR alone failed to maintain DS equal to that at 60 min of HS. However, with the other fluids, DS was maintained or increased. After infusion of the fluid therapies, diaphragm blood flow changes mirrored the alterations in DS. Compared with LR alone, hydrogen peroxide concentration was significantly reduced by the addition of DA and by Hespan. In addition, the percentages of diaphragm apoptosis with the administration of LR + DA, Hespan, and Hespan + DA were significantly less than when LR alone was administered. jpierce@kumc.edu Sponsored by: TSNRP Grant HU0001-05-1-TS11

RES21 Family Presence During Procedures in the Pediatric Intensive Care Unit

Tammy S. Robertson, Cathie Guzzetta, Kerry Shields; Children’s Medical Center, Dallas, TX

Purpose: (1) To determine whether a family presence protocol based on guidelines of the Emergency Nurses Association (ENA) facilitates uninterrupted patient care and appropriate family behavior at the bedside, and (2) to describe attitudes and experiences of family members and health care providers present during invasive procedures in the pediatric intensive care unit (PICU). Background: Studies from emergency departments have shown that family members need to be near the patient during procedures and resuscitation. Organizations including the ENA, the American Association of Critical-Care Nurses (AACN), and the American Heart Association have endorsed family presence (FP) during CPR or invasive procedures. Few studies have examined FP during procedures or in critical care areas, and none have explored the feasibility of applying ENA guidelines for FP. Methods: ENA guidelines for FP were used. This descriptive-exploratory study conducted in a 22-bed trauma/neurosurgical PICU surveyed 80 family members and direct health care providers (HCPs), with response rates of 97% for families and 93% for HCPs. Three data collection tools were used. The PICU Family Presence Protocol Data Collection Form was completed by the facilitator who supported families during FP. The PICU Family Presence Family Member Survey and the PICU Family Presence Healthcare Provider Survey were completed by families and staff, respectively. Means, frequency distributions, analyses of variance, t tests, correlations, and Cohen’s κ were computed by using SAS 9.1.3. Results: Twenty-six families were eligible for FP experiences. In 2 cases, the family declined to stay, and in another the physician did not agree to FP. Thirty procedures were performed on the 23 included patients. Family facilitators reported no disruptive behavior or interruption in care and commented that families helped the patient cope during the procedure. 
All parents believed they had a right to be present, and 97% thought that FP gave them peace of mind and said they would choose to be present again. All health care providers agreed that the outcome of the procedure was not affected by FP, and 87% supported FP during procedures. Ninety-two percent of HCPs would agree to FP again. Conclusions: ENA guidelines include an appropriate screening tool for FP. With the support of a family facilitator, FP during procedures in the PICU is feasible and does not lead to interruptions in care. Family members believe they have a right to be present during procedures, and HCPs reported minimal or no effect on their practice with family present. Family presence is an effective way to promote patient- and family-centered care in the critical care setting.

RES22 Hygiene and Health: Can Baths and Oral Care Reduce Cardiac Surgery Infections?

Kathy L. Armstrong, Amanda Hessels, Margaret Janasie; Jersey Shore University Medical Center, Neptune, NJ

Purpose: Sternal wound infection after cardiac surgery was identified as a postoperative complication that affects morbidity, length of stay, and associated cost of care. One important component of patient preparation was identified as preoperative compliance with the patient bath and oral care the night before and the morning of surgery. The infection prevention and control department reviews a random sample of 25% of cardiac surgery cases involving patients hospitalized before surgery. Background: Jersey Shore University Medical Center (JSUMC) performs an average of 800 cardiac surgery cases annually. A goal of the cardiac surgery program is to decrease the incidence of sternal wound infections and their effects on overall patient outcomes. Surveillance data show that the rate of deep surgical site infections after cardiothoracic surgery (CTS) has not decreased. A review of the literature was performed to identify best practices for CTS patients. Methods: An institutional assessment was performed. The relative importance and feasibility of interventions were evaluated. One such practice is preoperative bath and oral care preparation with chlorhexidine. The CTS cases were reviewed for the first time to assess any association between sternal wound infections and bathing preparation practice. Omissions in either the actual bathing or the nursing documentation that patients were prepared with chlorhexidine preoperatively were identified. 
The evidence-based practice question was “Will administration of chlorhexidine baths and oral care preoperatively play a role in reducing the incidence of postoperative sternal infections?” Results: On the basis of the evidence in existing literature, product safety information, consultation with other hospital cardiac surgery programs, and JSUMC’s ability to develop a standard of care, the standard of care for preparing CTS patients was defined as follows: every inpatient will receive 1 bath and 1 oral rinse with chlorhexidine products at 9 PM the night before surgery and at 5 AM the day of surgery. The infection control team met with individual nursing units to identify barriers and solutions for the nurses providing the preoperative preparations. Conclusions: After several solutions were implemented (revision of the standing order set, identification of the best times and dose of bath and oral care for patients, nurse education, development of a nursing care plan, and creation of a patient education handout), an educational campaign was developed with scripting, poster boards, and handouts to ensure consistency of the message. Five months after implementation of the plan to improve compliance with the preoperative preparation, compliance had improved from 21% to 100%.

RES23 Impact of Antimicrobial Products on the Infection Rate in the Cardiovascular Surgery Population

Kim B. Tindal, Cathy Moore; Gaston Memorial Hospital, Gastonia, NC

Purpose: To determine whether the use of an antimicrobial dressing (AMD) on cardiovascular surgical wounds immediately after surgery would decrease our rate of hospital-associated surgical site infections. Background: Despite advances in infection control practices, surgical site infections cause prolonged hospital stays; increase costs for patients, providers, and hospitals; and contribute to increased morbidity and mortality. The Centers for Disease Control and Prevention estimates that 14% to 16% of hospital-associated (HA) infections are surgical site infections (SSIs). The estimated savings for preventing just 1 infection is $25 546, not to mention improved patient outcomes. Methods: Baseline data were obtained from January 1, 2007, until July 31, 2007, by using chart review for all coronary artery bypass graft (CABG) chest and leg wounds. The evaluation period was the same months in 2008. The study population included all inpatients with surgical wounds classified as either clean (class I) or clean contaminated (class II) in whom a surgical infection developed within 30 days of the surgical date. In December 2007, we converted our standard dressing products (gauze, Telfa, Kerlix, etc) to the AMD product. This dressing is impregnated with a 0.2% concentration of mild antiseptic solution and is used as the primary skin dressing. By converting all the product in our facility, we made sure that compliance would be 100%. Results: The total number of SSIs decreased from 7 to 1, and the total number of procedures decreased from 127 to 101; the HA SSI rate per 100 targeted procedures decreased from 5.5 to 0.99, an 82% reduction. SSIs involving the leg decreased from 3 to 1, with the HA SSI rate in the leg per 100 targeted procedures decreasing from 2.4 to 0.99, a 58.6% reduction. SSIs involving the chest decreased from 4 to 0, with the HA SSI rate in the chest per 100 targeted procedures decreasing from 3.1 to 0. 
Conclusions: Without a noticeable change in clinical practice, we were able to demonstrate an 82% reduction in our hospital-acquired infection rate from the baseline period to the evaluation period in our combined cardiovascular surgical wounds. Additional surgeries (cesarean sections, abdominal hysterectomies, and peripheral vascular surgeries) were also included in the study. The cumulative outcome was a 91% reduction in the hospitalwide HA SSI rate. We are currently changing to this product.
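The rate arithmetic in the abstract above can be reproduced directly from the reported counts; a minimal sketch in Python (the function names are ours, not the study’s):

```python
def rate_per_100(infections: int, procedures: int) -> float:
    """Hospital-associated SSI rate per 100 targeted procedures."""
    return infections / procedures * 100

def percent_reduction(before: float, after: float) -> float:
    """Relative reduction between two rates, as a percentage."""
    return (before - after) / before * 100

baseline = rate_per_100(7, 127)    # baseline period: 7 SSIs in 127 procedures
evaluation = rate_per_100(1, 101)  # evaluation period: 1 SSI in 101 procedures
print(round(baseline, 1), round(evaluation, 2))        # 5.5 0.99
print(round(percent_reduction(baseline, evaluation)))  # 82
```

Note that the 82% figure is the reduction in the per-100-procedure rate, not in the raw infection counts, since the denominator also changed between periods.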

RES24 Impact of Critical Risk Factors on Symptoms of Posttraumatic Stress Disorder and Salivary Cortisol in Children in a Pediatric Intensive Care Unit: A Pilot Study

Rhonda M. Board, Northeastern University, Boston, MA

Purpose: To examine the contribution of 5 known, critically important risk factors to the psychological and neuroendocrine status of children admitted to a pediatric intensive care unit (PICU) and to explore the feasibility of testing the hypotheses in a full-scale study. Background: Research strongly supports that serious sequelae may occur in children after unusually stressful events such as violence or disaster. Researchers have examined posttraumatic stress disorder (PTSD) in children with physical disease and children undergoing medical treatments and found that the variables most predictive of stress symptoms in children include parent distress, child acute distress, and degree of exposure to traumatic experiences. No published studies have examined PICU hospitalization as a primary traumatic experience for children, yet the stress response can significantly influence children’s development and cause considerable damage to their psychological, biological, and social health. Methods: A prospective, correlational design was used to yield baseline hospitalization, parental, and child data, and 2-week and 3-month follow-up outcomes data. Families of 8 PICU-hospitalized children aged 7 to 12 years were recruited from 2 large academic medical centers. The 5 risk factors measured were parental stress, parental anxiety, child anxiety, severity of the child’s illness, and invasive procedures performed on the child. Outcomes variables were child PTSD symptom severity and salivary cortisol levels. Descriptive statistics, repeated-measures analysis of variance methods, and case study were used to analyze the data. Results: Parental stress was low during hospitalization, but all parents were anxious during hospitalization and after discharge. Mothers’ state anxiety significantly increased over time from baseline to 2 weeks (P = .009) and from baseline to 3 months (P = .02). 
Child posttraumatic stress symptoms decreased from baseline to 2 weeks (P = .08) and continued to decrease at 3 months (P = .06). Most children with average or high anxiety had various degrees of PTSD symptoms, whereas children with low anxiety had doubtful or mild PTSD symptoms. As the severity of PTSD symptoms increased over time, the level of salivary cortisol decreased at times 2 and 3 (P = .09 and P = .08, respectively). Conclusions: Predicted trends in the data were found. All variables warrant further investigation using similar methods in a full-scale study with an emphasis on recruiting the most seriously ill children. A thorough understanding of the PICU experience for all members of a family can facilitate appropriate screening for problems and timely support in order to improve quality of care.

RES25 Impact of Myocardial Injury in Patients After Subarachnoid Hemorrhage

Joyce Miketic, Marilyn Hravnak, Elizabeth Crago; University of Pittsburgh, Pittsburgh, PA

Purpose: To describe the prevalence of myocardial injury, as quantified by an elevated level of cardiac troponin I (cTnI ≥ 0.3 ng/mL) within the first 5 days after aneurysmal subarachnoid hemorrhage (aSAH), and its impact on functional outcomes and mortality at 3 months. Background: Patients with aSAH suffer a primary brain injury at the time of aneurysm rupture. Evidence indicates that they may also experience myocardial injury at the time of rupture due to a hypothesized catecholamine surge, but the prevalence of this problem and its impact on patients’ outcomes have not been well described. Methods: This prospective longitudinal study recruited 237 aSAH patients from 24 to 82 years old (mean, 54.5 years; SD, 11 years) with a Fisher grade >2 and/or Hunt/Hess grade ≥3 who were admitted to the neurological intensive care unit. Serum level of cTnI was measured for 5 days after enrollment in the study. Patients were dichotomized into myocardial injury (cTnI peak ≥ 0.3 ng/mL) and no injury (cTnI < 0.3 ng/mL) groups. Outcomes evaluated by interview at 3 months were patients’ perception of functional recovery measured by the Glasgow Outcome Scale (GOS), functional disability measured by the Modified Rankin Scale (MRS), and family report of death. Descriptive, chi-square, and binary logistic regression analyses (SPSS v16.0) were done. Results: The level of cTnI was elevated in 43% of subjects, with few patients in either group having a medical history of cardiac disease (15.6% for cTnI ≥ 0.3 ng/mL vs 10.3% for cTnI < 0.3 ng/mL, P = .25). A significant relationship existed between cTnI ≥ 0.3 ng/mL and bleed severity by Hunt/Hess grade, Fisher grade, and age. cTnI ≥ 0.3 ng/mL was significantly related to poor outcomes by both GOS (P < .001) and MRS (P < .001). Forty patients died, and a significant association was detected between cTnI ≥ 0.3 ng/mL and death (P < .001). 
A cTnI level ≥ 0.3 ng/mL remained a significant predictor of poor outcome by MRS (OR, 2.7; 95% CI, 1.3–5.9; P = .01) and GOS (OR, 2.2; 95% CI, 1.0–4.6; P = .01) after bleed severity and race were controlled for. Conclusions: Myocardial injury occurs commonly after aSAH in patients without a cardiac history and is associated with both bleed severity and poorer patient outcomes. Myocardial injury is also an independent predictor of poor outcomes even after other contributors such as race, age, and bleed severity are controlled for. Further study will determine the mechanistic link between myocardial injury and poorer aSAH outcomes. jkm10+@pitt.edu Sponsored by: NHLBI R01HL074316.
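The odds ratios reported above come from adjusted logistic regression models; the underlying dichotomization and an unadjusted odds ratio with a Wald (logit) 95% confidence interval can nonetheless be sketched from first principles. All counts and names below are illustrative, not the study’s data:

```python
import math

THRESHOLD_NG_ML = 0.3  # cTnI cutoff used to define myocardial injury

def has_myocardial_injury(peak_ctni: float) -> bool:
    """Dichotomize a patient's peak cTnI at the 0.3 ng/mL cutoff."""
    return peak_ctni >= THRESHOLD_NG_ML

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Unadjusted OR and 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical 2x2 counts, for illustration only
or_, lo, hi = odds_ratio_ci(30, 40, 15, 40)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An adjusted OR, as in the abstract, would instead come from a multivariable logistic model with bleed severity and race as covariates.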

RES26 Impact of the Use of Battery-Operated Toothbrushes for Intubated Patients on the Ease and Frequency of Oral Care

Nancy E. Brames, Donna Prentice, Linda Rimmer, Mariah Charles, Cathy Johns; Barnes-Jewish Hospital, St Louis, MO

Purpose: This study examined whether battery-operated toothbrushes are easier to use than manual toothbrushes when performing oral care on intubated patients and whether their use increases the frequency of tooth brushing. Nurse satisfaction with and preference for toothbrush type also were studied. Background: Previous studies have shown that brushing the teeth of intubated patients as part of an oral care protocol significantly decreases the incidence of ventilator-associated pneumonia (VAP) in this population of patients. However, the ease of brushing the teeth of a patient with an endotracheal tube may affect the frequency with which this task is performed. Methods: A descriptive study using before and after surveys was conducted in 3 intensive care units (ICUs). The prestudy survey was completed by 113 registered nurses (RNs); of those, 77 (68%) returned the poststudy survey. Both surveys used a Likert-type scale. Battery-operated toothbrushes were given to the ICUs for use with intubated patients after return of the prestudy surveys. The toothbrushes were used for 4 months before the poststudy survey was completed. Descriptive statistics were used to examine the participants’ preferences in toothbrush type and the ease of use of the battery-operated toothbrush. The t test was applied for group comparisons when appropriate to examine differences between groups. Results: Statistically significant increases in all questions regarding frequency, ease, effectiveness, and satisfaction were noted (P ≤ .05). The study showed that 67.6% of RNs brushed the teeth of intubated patients at least once per shift more than 60% of the time with a battery-operated toothbrush compared with 48.7% with manual toothbrushes. Battery-operated toothbrushes were rated effective to very effective by 60.8% of RNs compared with 11.5% for manual toothbrushes. Of the RNs, 63.5% were satisfied to very satisfied with the battery-operated toothbrush, compared with 11.4% with the manual toothbrush. 
The battery-operated toothbrush was rated somewhat to much easier to use by 77% of RNs; 84.1% preferred battery-operated toothbrushes. Conclusions: RNs found that using a battery-operated toothbrush for oral care on intubated patients is easier than using a manual toothbrush. They also thought that the battery-operated toothbrush was more effective than the manual one. Providing a toothbrush that is easier to use and is viewed as more effective increases the frequency of oral care in intubated patients. Nurse satisfaction is significantly increased with the use of battery-operated toothbrushes.

RES27 Interrater Reliability of Japanese Version of Richmond Agitation-Sedation Scale in Various Patients in Intensive Care Units

Takeshi Unoki, Hideaki Sakuramoto, Aiko Okimura, Chiharu Takeshima, Yaeko Yanagisawa, Fumiko Tamura, Kazuhiro Aoki, Toshiaki Mochizuki, Norio Otani; St Luke’s College of Nursing, Tokyo, Japan

Purpose: To develop a Japanese version of the Richmond Agitation-Sedation Scale (RASS) and evaluate the interrater reliability of the scale in various patients in intensive care units (ICUs). Background: Objective assessment of sedation and/or agitation status is essential to maintaining patients’ comfort and safety. RASS is a relatively new sedation scale originally developed by Curtis N. Sessler and his colleagues in the United States and has been examined for its validity and reliability; however, no Japanese version of RASS has been validated. Methods: We developed a Japanese version of RASS by using the back-translation method. The original English RASS was translated into Japanese, and a professional translator back-translated it into English. We compared the original and back-translated RASS in terms of semantic equivalence, acceptability, comprehensibility, and appropriateness. This process was repeated 3 times. Finally, Sessler confirmed for us that both versions of RASS were linguistically equivalent in terms of clinical use. Then, paired evaluators simultaneously and independently evaluated depth of sedation by using the Japanese RASS in convenience samples of ICU patients to evaluate interrater reliability. Results: Twenty-nine ICU patients were examined a total of 92 times by evaluator pairs, resulting in 184 observations. Thirty-one percent of observations were RASS 0, whereas 62% were RASS scores less than 0. Percentages of observations in patients who were sedated and in patients receiving mechanical ventilation were 47% and 84%, respectively. Thirty-nine observations were in patients with neurological injury/disease. Agreement in all observations was excellent (weighted κ = 0.8). In subgroup analysis, excellent agreement was seen in nonsedated patients (weighted κ = 0.8) and patients with neurological injury/disease (weighted κ = 0.9). 
Conclusion: The Japanese version of RASS was highly reliable in various ICU patients including nonsedated patients and patients with neurological injury/disease.
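The weighted κ statistic reported above can be computed without a statistics package. A minimal sketch with linear disagreement weights over the 10-level RASS range follows (the abstract does not state whether linear or quadratic weights were used, and the rating data here are invented for illustration):

```python
def weighted_kappa(ratings_a, ratings_b, categories):
    """Cohen's weighted kappa with linear disagreement weights.
    `categories` is the ordered list of possible scores
    (for RASS: -5 through +4)."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings_a)
    # Observed joint proportions of (rater A, rater B) score pairs
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[idx[a]][idx[b]] += 1 / n
    # Marginal proportions for each rater
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weight: |i - j| / (k - 1)
    w = lambda i, j: abs(i - j) / (k - 1)
    disagree_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    disagree_exp = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1 - disagree_obs / disagree_exp

RASS_LEVELS = list(range(-5, 5))  # -5 (unarousable) to +4 (combative)

# Two raters scoring the same 6 observations (illustrative data)
rater1 = [0, -1, -2, 0, -3, 1]
rater2 = [0, -1, -1, 0, -3, 1]
print(round(weighted_kappa(rater1, rater2, RASS_LEVELS), 2))  # 0.88
```

With linear weights, near-misses between adjacent RASS levels are penalized only lightly, which suits an ordinal sedation scale better than treating every disagreement as total.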

RES28 Little Kids + Little Bugs = Big Problems Eliminating Catheter-Associated Bloodstream Infections in Pediatric Critical Care

Rebecca N. Ellis, Kristi Ryan; Duke Children’s Hospital, Durham, NC

Purpose: Elimination of hospital-acquired infections was identified as a priority by national regulatory agencies and reimbursement companies. Catheter-associated bloodstream infections (CA-BSIs) increase hospital length of stay, cost, and morbidity and mortality. The National Association of Children’s Hospitals and Related Institutions (NACHRI) coordinated a group of 29 pediatric intensive care units (PICUs) and pediatric cardiac ICUs (PCICUs) in a collaborative to eliminate CA-BSIs. Background: Duke Children’s Hospital had a 16-bed combined PICU/PCICU that increased to 20 beds in April 2006. The average CA-BSI rate was 5.44 per 1000 catheter days for the 7 quarters preceding October 2006, including 31 CA-BSIs in the prior 11 months. Cost for a single CA-BSI was conservatively estimated to be $33 000 of increased hospital costs. In October 2006, the PICU/PCICU at Duke Children’s Hospital became a founding team in the NACHRI-led collaborative. Methods: Preimplementation baseline data were obtained from the infection control department. Project coordination involved data collection and management, education for all staff including evidence-based insertion and maintenance practice changes, and facilitating interdisciplinary collaboration. All central venous catheter insertions were observed to determine adherence to the practice standard. Each day, patients with central venous catheters were identified, including catheter location, continued need, and functionality. Patient care nurses completed random surveys, evaluating adherence to the maintenance bundle, including daily review of catheters and access practices related to hand washing and hub scrub. Results: Standardization of the dressing change process was identified as having the biggest impact on the CA-BSI rate. The PICU/PCICU had an increase in the number of days between CA-BSIs from less than 30 days to 80 days. 
CA-BSIs were reduced from 32 in the 12 months before the collaborative began to 10 in the latest 12 months. For each CA-BSI, a multidisciplinary team performed a root cause analysis. Several potential contributory causes were identified, including nonadherence to the maintenance bundle. Use of central catheters remained unchanged, both for the total number of central venous catheters per month and for the number of patients with a central venous catheter, despite an increase in average daily census. Conclusions: Since the implementation, the PICU/PCICU has demonstrated a statistically significant decrease in CA-BSIs with a longer time between occurrences. Additionally, the increased average daily census without a change in the mean number of central venous catheters may demonstrate successful daily review of the need for and function of central venous catheters. This, in turn, leads to decreased opportunity to acquire a CA-BSI. The PICU/PCICU remains committed to decreasing CA-BSIs to a goal of zero.
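The CA-BSI rate metric used in this collaborative normalizes infection counts by catheter days, so it stays comparable as census and catheter use change. A minimal sketch (the catheter-day denominators are not given in the abstract, so the counts below are illustrative only):

```python
def cabsi_rate(infections: int, catheter_days: int) -> float:
    """CA-BSI rate per 1000 central catheter days."""
    return infections / catheter_days * 1000

# The abstract reports a baseline rate of 5.44 per 1000 catheter days;
# these counts are hypothetical, chosen only to show the arithmetic.
print(cabsi_rate(11, 2000))  # 5.5
print(cabsi_rate(10, 4000))  # 2.5
```

Because the denominator is exposure time rather than patient count, a unit can admit more patients without its rate rising, as long as catheter use and infections do not increase.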

RES29 Multicenter Study of Bacteria on Reused Clean Electrocardiography Lead Wires: Are Monitored Patients At Risk For Nosocomial Infections?

Nancy M. Albert, Kelly Hancock, Susan Krajewski, Matthew Karafa, Karen Rice, Susan Fowler, Colleen Nadeau; Cleveland Clinic, Cleveland, OH

Purpose: To assess the presence of bacterial microorganisms, and the number of different microorganisms, on 320 clean but not currently in use electrocardiography lead wires (ECG-LWs) sampled from surgical or medical critical care (CC) units, surgical or medical telemetry (Tel) units, the emergency department (ED), and operating rooms (ORs) of 4 large hospitals (bed size, 481 to >1000) with Magnet status and cleaning policies: 2 urban teaching, 1 community teaching, and 1 community nonteaching hospital. Background: Reprocessing of reusable ECG-LWs is a potential source of microorganisms capable of causing nosocomial infection in hospitalized patients. However, little is known about actual growth of pathogenic microorganisms on clean ECG-LWs that are ready for use by incoming patients. Methods: Study teams, led by laboratory personnel, wore mask, gown, and gloves during swabbing and carried out procedures in 1 day. The 24 bacterial species identified were grouped by risk for human infection: at risk (n = 9 bacteria), potential risk (n = 5 bacteria), and no risk (n = 10). Models of the presence of bacteria were generated by using a generalized estimating equation logistic model adjusting for multiple species on some ECG-LWs. The number of bacterial species per ECG-LW was analyzed by using a Poisson regression model. Pairwise differences were used to determine differences between sites and units, and differences were Bonferroni corrected for the 6 comparisons, resulting in a significance criterion of P < .008. Results: In all, 226 bacterial growths were identified on 201 (63%) of the ECG-LWs, varying by site from 49% to 80%. At-risk or potential-risk bacterial growths were found on 121 ECG-LWs (37.8%; range, 28.8%–43.8%). Both urban hospitals had less bacterial growth (all P ≤ .01) and fewer bacterial species per ECG-LW (all P ≤ .01) than both community hospitals, and the largest urban hospital had less at-risk growth and fewer species than the other 3 sites (all P < .05). 
By clinical area, the presence of any bacteria (P = .02) and the number of bacterial species per ECG-LW (P = .002) differed, with the OR having less growth and fewer species than other areas (both P < .02) and the ED and Tel units having more growth than CC units. Conclusions: Reusable clean ECG-LWs carry microorganisms that can potentially cause human nosocomial infection, especially in immunologically compromised patients or those with open wounds. Bacterial growth and number of species differed by hospital and clinical area. Further study is warranted to determine the rate of nosocomial infection from reusable ECG-LWs, the prevalence of resistant bacteria on ECG-LWs, and the potential association between cleaning policies and bacterial growth.
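The Bonferroni correction used in the Methods above is simple arithmetic: the familywise α is divided by the number of pairwise comparisons to obtain the per-comparison significance criterion. A minimal sketch:

```python
def bonferroni_alpha(alpha: float, n_comparisons: int) -> float:
    """Per-comparison significance criterion under Bonferroni correction."""
    return alpha / n_comparisons

# 6 pairwise comparisons at a familywise alpha of .05
print(round(bonferroni_alpha(0.05, 6), 3))  # 0.008
```

This reproduces the P < .008 criterion stated in the abstract (.05 / 6 ≈ .0083, conventionally reported as .008).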

RES32 Oral Care Practices Survey for Orally Intubated Adult Critically Ill Patients

Laura L. Feider, Pamela Mitchell, Lori Loan, Betty Gallucci, Elizabeth Bridges; US Army, Madigan Army Medical Center, Tacoma, WA

Purpose: To describe oral care practices performed by critical care nurses for orally intubated critically ill patients and to compare these practices with the 2005 AACN Procedure Manual for Critical Care and the Centers for Disease Control and Prevention (CDC) recommendations for oral care. Background: Ventilator-associated pneumonia (VAP) is a major threat to all patients who are mechanically ventilated. Oral care is one nursing intervention that targets VAP prevention. Oral care policies and practices vary from state to state, hospital to hospital, and even within intensive care units. Protocols guiding oral care may be inconsistent, impractical, or difficult to follow. The primary goal of oral care is to promote oral hygiene and thereby decrease oropharynx colonization, dental plaque colonization, and aspiration of colonized saliva. Methods: A descriptive, cross-sectional design with a Web-based survey was used to describe oral care practices reported by critical care nurses. The sampling target included any registered nurse who was a current 2006–2007 member of the AACN and was working in an adult critical care unit in the United States. A valid and reliable Web-based survey was created by using face validity, content validity (97%), and test-retest reliability measures (r = 0.70–0.92). Dillman’s survey method was used. Three hundred forty-seven randomly selected AACN members (17% response rate) completed the 31-item Web-based survey of oral care practices from November 1, 2006, to January 5, 2007. Results: Oral care was performed every 2 hours (50%) or every 4 hours (42%), most commonly using foam swabs (97%). Oral care was reported as a high priority (47%). Nurses with more than 7 years of critical care experience performed oral care significantly more frequently (P = .008) than did nurses with less than 7 years of critical care experience. 
Nurses with a bachelor of science in nursing used foam swabs (P = .001), suctioned before endotracheal tube suctioning (P = .02), and suctioned after oral care (P < .001) more frequently than did nurses with an associate degree or diploma. Nurses whose intensive care units had an oral care policy (72%) reported that it called for using a toothbrush (48%), using toothpaste (32%), brushing with a foam swab (64%), using chlorhexidine gluconate oral rinse (38%), suctioning the oral cavity (70%), and assessing the oral cavity (60%). Conclusions: This is the first national survey of critical care nurses’ oral care practices to use the AACN membership database. Additionally, this study compared reported practices with the AACN and CDC recommended practices. Results of this nationwide survey indicate that discrepancies exist between reported practices and policies. Oral care policies appear to be present but not well used. Performing oral care is an essential nursing strategy for VAP prevention.

RES34 Precardioversion Transesophageal Echocardiography in the Electrophysiology Laboratory Immediately Before Cardioversion

Maureen L. Gagen, Michelle Ruhnke, Diane Walford, Kathy Furlong, Roberto Lang, Bradley Knight; University of Chicago Medical Center, Chicago, IL

Purpose: To determine whether performing the transesophageal echocardiogram and cardioversion together decreases the total amount of sedation used for the overall procedure or improves patient satisfaction. Background: A transesophageal echocardiogram (TEE) is often obtained before an elective cardioversion for atrial fibrillation to rule out an atrial thrombus. We recently implemented a system whereby patients scheduled for elective cardioversion undergo precardioversion TEEs in the electrophysiology laboratory before cardioversion rather than in the echocardiography laboratory elsewhere in the medical center. Methods: Twenty consecutive patients who underwent a TEE in the echocardiography laboratory and a cardioversion in the electrophysiology laboratory were identified. These patients were sedated for the TEE in the echocardiography laboratory and transferred to the electrophysiology laboratory for elective cardioversion after recovery in the echocardiography laboratory. In the electrophysiology laboratory, these patients underwent a second round of sedation before cardioversion and recovered in the electrophysiology laboratory before discharge (sequential group). These patients were compared with 20 additional patients who underwent a TEE in the electrophysiology laboratory immediately before cardioversion (simultaneous group). Patients who did not undergo cardioversion because an atrial thrombus was identified at the time of TEE were not included in this analysis. Results: Patients in the simultaneous group received less fentanyl (mean [SD], 121 [58] vs 149 [47] μg; P = .10) and midazolam (mean [SD], 8 [3] vs 9 [3] mg; P = .4) than patients in the sequential group, but the differences were not statistically significant. 
However, patients in the simultaneous group did have a significant reduction in total procedure time (mean [SD], 3.7 [1.3] vs 6.3 [1.7] hours; P < .001). Every patient who underwent either approach preferred having the TEE in the electrophysiology laboratory immediately before cardioversion. Conclusions: A TEE immediately before cardioversion in the electrophysiology laboratory, rather than a TEE in the echocardiography laboratory followed by cardioversion in the electrophysiology laboratory, does not significantly reduce the total amount of sedation required, but it significantly reduces the overall procedure time by 2.6 hours and is preferred by patients.

RES35 Quease Ease Aromatherapy for Treatment of Postoperative Nausea and Vomiting

Sheila Rhyne Reagan, Leanne King, Faye Clements; Gaston Memorial Hospital–CaroMont Health, Gastonia, NC

Purpose: To determine whether patients undergoing total joint replacement would be willing to use an aromatherapy device to treat postoperative nausea and vomiting (PONV) and whether it would be useful. Background: PONV continues to be a major cause of patient dissatisfaction. The literature reports the incidence as between 10% and 60%. Critical Care Nurse has reported medication as the first-line treatment for nausea and vomiting while recognizing that alternative or complementary measures may improve patient outcomes. Little research was found on the use of complementary measures by older adults. Methods: Ninety-eight patients from one surgeon’s practice were recruited for the study. The study was approved by the hospital institutional review board. Quease Ease was the aromatherapy selected. It combines essential oils of peppermint, spearmint, lavender, and ginger in an enclosed delivery device. All patients received the same intraoperative antiemetics during surgery. Participants received the device in the postanesthesia care unit and were assessed for nausea during the first 24 postoperative hours. Nursing staff documented reports of nausea and use of Quease Ease by the patient on a nausea data collection tool that was developed for this study. Results: Data were analyzed by using descriptive statistics in an Excel spreadsheet. Nearly half (46%) of the 98 participants reported nausea in the first 24 hours after surgery. Of the participants who reported nausea, 39 (85%) used the Quease Ease device. Members of the health care team reported that the use of Quease Ease lessened patient nausea, increased participation in therapy, and was requested by patients, and they recommended that it be considered for the total joint protocol. Conclusions: The study provides evidence of willingness of patients undergoing total joint replacement surgery to use an aromatherapy device and beginning evidence of its efficacy. 
Further study is recommended to test definitively the effectiveness of Quease Ease in reducing nausea and vomiting.

RES36 Radiographic Verification of Bedside Feeding Tube Placement Using an Electromagnetic Guided Placement Device

Janice M. Powers, Terri Beeson, Tracy Spitzer, Michael Luebbehusen, Jamie Brown; St Vincent Hospital, Indianapolis, IN

Purpose: To compare the accuracy of an electromagnetically guided placement device for enteric tubes with abdominal radiography. Background: Currently, clinicians depend on radiographic verification to ensure accurate placement of enteric tubes placed at the bedside. This often results in delayed onset of enteric nutrition and increased cost. The use of radiography also exposes the patient to unnecessary radiation. Use of electromagnetically guided technology for bedside placement of a feeding tube may facilitate more accurate placement and reduce dependence on radiography. Methods: A sample of 200 subjects at 3 separate institutions was prospectively enrolled in this study. An experienced clinician trained in the use of an electromagnetically guided system placed and interpreted the location of the enteric tube based on the screen tracing. Abdominal radiographs were then completed with the use of barium contrast material to ensure accuracy of radiologic interpretation. The interpretations from the electromagnetically guided device as well as the radiographs were compared for accuracy. This comparison was undertaken by experienced clinicians and radiologists. Additional radiograph interpretation was completed by an independent radiologist. Results: Statistical analysis of the data will be performed by using Fisher exact tests, paired t tests, and binary logistic regression. Results will include counts of accurate/inaccurate placement using the electromagnetically guided placement device when verified by abdominal radiography. Safety data will also be presented. The predictive value of sex, diagnosis, time, and staff on the safety and accuracy of placement will also be evaluated. Conclusions: At interim analysis, electromagnetically guided placement appears to be an accurate, safe, and timely means for placement of enteric tubes.
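The Fisher exact tests named in the planned analysis can be sketched with a small self-contained implementation; the agreement table in the example is hypothetical, not data from this study.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact P value for the 2x2 table [[a, b], [c, d]],
    summing hypergeometric probabilities no larger than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def prob(x):
        # Hypergeometric probability of x in the top-left cell, margins fixed
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = prob(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical agreement table: rows = device reading correct/incorrect,
# columns = radiograph confirms/refutes (illustration only)
p = fisher_exact_2x2(46, 4, 2, 8)
```

The two-sided P value sums all tables with the given margins whose probability does not exceed that of the observed table, which matches the convention used by most statistical packages.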

RES37 Reducing Indwelling Urinary Catheter Device Days and Catheter-Associated Urinary Tract Infections in a Medical Intensive Care Unit

Kathryn M. Killeen, Ellen Elpern, Omar Lateef; Rush University Medical Center, Chicago, IL

Purpose: To implement and evaluate a multidisciplinary initiative to reduce catheter-associated urinary tract infections (CAUTIs) in a medical intensive care unit (MICU) by decreasing use of urinary catheters. Background: Indwelling urinary catheters are used commonly in ICUs. CAUTIs increase with duration of catheter use, with risk estimated to be at least 5% per day. Strategies to prevent CAUTIs have focused on catheter materials, drainage systems, insertion techniques, and use of anti-infectives. Among all methods investigated, the most important intervention to prevent CAUTIs is limiting catheter use. Urinary infections in the critically ill can increase length of ICU stay and mortality. Methods: Indications for the maintenance of indwelling urinary catheters were developed by a team of critical care clinicians. For a 6-month intervention period, patients in an MICU with indwelling urinary catheters were evaluated daily by using criteria for appropriate catheter continuance. Recommendations were made to discontinue indwelling urinary catheters in patients who did not meet appropriateness criteria. Urinary catheter device days and rates of CAUTIs during the intervention were compared with those of the previous 11 months. Unpaired t tests were used to determine statistical significance with a P value less than .05 considered significant. Results: During the 6-month intervention, 337 patients with indwelling urinary catheters were encountered, for a total of 1432 urinary catheter device days. Overall, 456 of the 1432 device days (32%) were considered inappropriate. Reasons for catheter continuation against recommendation were incontinence, particularly in female patients, and concern for skin integrity. With use of guidelines, urinary catheter device days were reduced to a mean of 238.6 days per month from the preintervention rate of 311.7 catheter days per month (P = .01). 
No CAUTIs occurred in the 6-month intervention period compared with a mean preintervention monthly rate of 4.4 CAUTIs per 1000 device days (P < .001). Conclusions: CAUTIs are among complications fundamentally linked to nursing care and will most likely constitute a measure of nursing care performance. Our results confirm that urinary catheter device days and CAUTIs can be reduced by daily determinations by nurses of the need for the catheter.
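The utilization figures above can be checked with simple arithmetic; the sketch below uses only the counts stated in the abstract.

```python
# Proportion of urinary catheter device days judged inappropriate
inappropriate_days = 456
total_device_days = 1432
pct_inappropriate = 100 * inappropriate_days / total_device_days
print(f"{pct_inappropriate:.1f}% inappropriate")  # 31.8%, reported as 32%

# Reduction in mean monthly device days with the appropriateness guidelines
pre_rate, post_rate = 311.7, 238.6
print(f"{pre_rate - post_rate:.1f} fewer catheter days per month")  # 73.1
```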

RES38 Secondary Traumatic Stress, Burnout, Compassion Fatigue, and Compassion Satisfaction in Trauma Nurses

Mary Antoinette Murray, Kathryn Von Rueden, Theresa Logan, Katie Simmons, Mary Betsy Kramer, Erica Brown, Sara Hake, Rebecca Gilmore, Karen McQuillan, Melissa Madsen; University of Maryland Medical Center, Baltimore, MD

Purpose: To determine the incidence of secondary traumatic stress (STS) and its relationship to burnout (BO), compassion fatigue (CF), and compassion satisfaction (CS) in nurses who primarily care for trauma patients. Secondary aims included describing the associations of those factors with personal characteristics, exposure to trauma variables, and coping strategies. Background: BO and CF may be associated with work-related exposure to extremely stressful events, such as those experienced by nurses who work in trauma centers or emergency departments. Frequent exposure to seriously injured patients or death and associated STS has been studied in the military population, but little evidence exists related to the cumulative effect of such stressors in civilian nurses who care for trauma patients on a daily or near-daily basis. Methods: The Professional Quality of Life (ProQOL), Penn Inventory (PI), and a demographic/behavioral survey were distributed to 262 nurses from all units in an urban all-trauma hospital. ProQOL, a 30-item self-administered tool, is used to assess BO, CF, and CS. PI, a 26-item self-administered tool, measures the presence of STS. The demographic/behavioral survey included questions about work experience, support systems, and coping strategies. Completed surveys were returned anonymously via drop boxes. Descriptive statistics were used to describe the frequency of STS, BO, CF, and CS. Pearson correlations were used to analyze relationships between these scores and nurse demographic and behavioral characteristics. Results: Surveys were returned by 128 nurses (49%); of those, 9 (7%) had PI scores indicative of STS. STS scores were correlated with BO (r = .55, P < .001) and CF (r = .42, P < .001). BO and CF scores were correlated (r = .60, P < .001). 
Three variables correlated with BO and CF: increased hours/shift (BO: r = .25, P = .005; CF: r = .25, P = .006); use of medicinals (BO: r = .31, P = .001; CF: r = .25, P = .006); and weaker coworker relationships (BO: r = .40, P < .001; CF: r = .31, P = .001). Years in trauma nursing, education, and age were not related to BO or CF. Those with CS did not report STS, BO, or CF. CS correlated (P = .001) with lower education level, meditation use, greater strength of supports, and strong coworker relationships. Conclusions: Although a majority of respondents did not report BO, CF, or STS, the data show that relationships exist between BO, CF, STS, and hours working/shift, coping strategies, and support systems. Social supports may be an important factor in managing stress and CS in nurses who primarily care for trauma patients. Further research is required to examine these relationships and explore the predictive value of behavioral/demographic variables on the development of BO, CF, and STS.
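The Pearson correlations reported above come from the standard product-moment formula, which can be sketched directly; the five pairs of subscale scores below are hypothetical illustrations, not study data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation for paired scores
    (the statistic behind the BO/CF relationships reported above)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ProQOL subscale scores for 5 respondents (illustration only)
burnout = [22, 30, 18, 27, 25]
fatigue = [20, 28, 15, 26, 21]
print(round(pearson_r(burnout, fatigue), 2))  # 0.98
```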

RES40 Standardizing Oral Care Practice Within Intensive Care Units in an Academic Teaching Center

Charles C. Reed, Wen Pao, Josephina Cochetti, Christie Harper, Randy Beadle; University Hospital, San Antonio, TX

Purpose: To determine current baseline oral care practices for intensive care unit (ICU) patients receiving mechanical ventilation at a level I trauma center. Second, to understand the nurses’ definition and perceptions of performing oral care. Last, to ascertain how nurses learned to perform oral care, the frequency with which oral care is provided, and barriers to oral care. Background: Ventilator-associated pneumonia (VAP) is the leading cause of morbidity and mortality in the ICU, according to the US Agency for Healthcare Research and Quality. Recent research has shown that the oral cavity is the primary source of respiratory-related infection and that proper oral care contributes to the reduction of VAP. In our organization, no oral care protocol or education activity exists, and interviews with nurses revealed a lack of consensus regarding proper oral assessment and care. Methods: IRB approval was obtained. Approximately 200 ICU nurses at University Hospital were invited to participate in the study. A paper/pencil survey tool was distributed to all participants. The Oral Care in Ventilated Population Questionnaire (OCVPQ) was developed by a team of ICU nurses after an extensive literature review. It included 4 areas of focus: attitude, assessment, practice, and barriers/tools. The questions included open-ended, yes/no, multiple-choice, and scaled responses. The survey tool was anonymous and confidential. Completed surveys were deposited into a sealed box in each nurse’s lounge. Boxes were collected by the research team 2 weeks after distribution of the survey tool. Results: 97% of respondents indicated that oral care was part of their routine practice and cited prevention of VAP as the foremost reason to perform oral care. 76% rated it as “very important” to patient outcomes. Nurses reported learning oral care primarily by experience, with some crediting nursing schools. Frequency of oral care varied from every 2 to 12 hours, with the average being every 4 hours. 
Fifty percent indicated that they assessed the patient’s mouth every hour. Oral care tools ranged from a washcloth or toothbrush and toothpaste to prepackaged kits. Barriers to performing oral care included patient agitation, lack of cooperation, acuity, physical limitations, and nursing time constraints. Conclusions: Our survey revealed inconsistencies in oral care practices that could lead to higher incidences of VAP. Essential elements to consider in improving effective practice are education and development of an evidence-based guideline and standardized assessment tool. Patient outcomes related to implementation of an evidence-based guideline and assessment tool require further study.

RES41 ST-Map Electrocardiographic Software Improves Nurses’ Use of and Attitude Toward Ischemia Monitoring and the Quality of Patient Care

Marjorie Funk, Prasama Sangkachand, Jennifer Phung, Julie Gaither, Angela Mercurio, Mary Jahrsdoerfer, Noreen Gorero, Brenda Sarosario, Francine LoRusso; Yale University School of Nursing, New Haven, CT

Purpose: Evidence suggests that nurses do not activate the ST-segment monitoring feature on the bedside monitor because they perceive it to be difficult to use. ST-Map electrocardiographic (ECG) software was designed to make ST-segment ischemia monitoring easier by incorporating graphical displays of ongoing ischemia. Our purpose was to determine if nurses’ use of and attitude toward ischemia monitoring and the quality of patient care related to ECG monitoring improve with the availability of ST-Map software. Background: Studies show that although 80% to 90% of transient ischemic events are asymptomatic, they are significant markers for adverse outcomes. Nurses should activate continuous ST-segment monitoring to identify patients with acute, but often silent, myocardial ischemia. The American Heart Association/AACN Practice Standards for ECG Monitoring recommend ST-segment monitoring for all patients at significant risk for myocardial ischemia that, if sustained, may result in acute myocardial infarction (MI) or extension of an MI. Methods: This 1-group pre/post intervention study of 61 staff nurses and 202 patients with acute coronary syndrome was conducted in the cardiac intensive care unit at Yale-New Haven Hospital. We obtained baseline data on nurses’ use of and attitude toward ischemia monitoring and the quality of patient care. We then provided education on ischemia monitoring and the ST-Map software, and the ST-Map software was installed on all bedside monitors. Nurses used the new ST-Map software for 4 months. We then obtained follow-up data on the same outcomes we examined at baseline. We used the McNemar test (nurse data) and chi-square and t test (patient data) to determine changes with the availability of ST-Map software. Results: The sample of 61 nurses was 93% female, with a mean age of 41 years. Before ST-Map was instituted, only 13% of the nurses had ever used ST-segment monitoring vs 90% after ST-Map (P < .001). 
The most common reason for not using ST-segment monitoring before ST-Map was inadequate knowledge (62%). The most common reason for liking ST-segment monitoring after ST-Map was knowing when a patient has ischemia (80%). The sample of 202 patients was 73% male, with a mean age of 62 years. Time to acquisition of a 12-lead ECG in response to symptoms or ST-segment changes before ST-Map was 5 to 15 minutes vs always <5 minutes after ST-Map (P < .001). There was no difference in time to return to the cardiac catheterization laboratory. Conclusions: The new ST-Map ischemia monitoring software was associated with more frequent use of ST-segment monitoring and improved attitudes of nurses toward it. It was also associated with a shorter time to the acquisition of a 12-lead ECG in response to symptoms or ST-segment changes. Additional research with larger samples is needed to examine the association of ST-Map with patient outcomes. Evaluation of ST-Map in other patient care settings and with broader patient populations is also indicated.
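The McNemar test used for the paired nurse data above can be computed exactly from the two discordant-pair counts; the counts in the example are hypothetical, chosen only to be consistent with the 13%-to-90% shift, and are not taken from the study.

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar P value from discordant-pair counts:
    b pairs changed in one direction, c in the other."""
    n = b + c
    k = min(b, c)
    # Two-sided binomial probability of a split at least this extreme
    p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(p, 1.0)

# Hypothetical: 47 nurses went from not using to using ST-segment
# monitoring and 0 went the other way (illustration only)
p = mcnemar_exact_p(47, 0)
```

Concordant pairs (nurses whose use did not change) drop out of the McNemar calculation, which is why only the two change counts are needed.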

RES42 SUGAR: Assessing the Suitability of Capillary Blood Glucose Analysis in Patients Receiving Vasopressors

Myra F. Ellis, Whitney Knouse, Allen Cadavero, Faresha Webber, Debra Farrell, Kesi Benjamin, Josephine Macmang, Joseph Libutan, Helen Shearin, Tonda Thomas; Duke University Hospital, Durham, NC

Purpose: To determine the relationship between vasopressor use in critically ill postoperative patients and the accuracy of capillary blood glucose (CBG) analysis. Background: Tight glycemic control in critically ill patients decreases infection and mortality. Studies have shown that CBG measurements are accurate in normotensive patients and correlate with measurements from arterial samples. However, patients receiving vasopressors may be normotensive by arterial blood pressure, but have altered peripheral perfusion. Because altered peripheral perfusion may affect the accuracy of CBG measurements, actual best practice for this patient population is not known. Methods: A convenience sample (n = 50) of adult postoperative cardiothoracic patients on insulin and vasopressors was prospectively enrolled. Demographics and significant medical history were abstracted from the medical record. Prospective data collection occurred every 6 hours while patients received an insulin infusion and included pharmacotherapy, vital signs, and 3 blood samples obtained simultaneously from arterial and peripheral sites. Samples were tested by using point-of-care technology and the central laboratory. The quality of peripheral perfusion was recorded by using a standardized scale. One-way repeated-measures analysis of variance was used to analyze differences in sets of blood glucose values within each time point. Results: We found a statistically significant difference in blood glucose levels measured in samples drawn from both capillary sticks and arterial/venous sites (central catheters) by using a point-of-care testing device as compared with the blood glucose levels obtained from the clinical laboratory (P < .001 for both). However, no difference was found in the point-of-care glucose measure from the capillary sites as compared with central ports (P = .88). 
In addition, we found that vasopressive medications (eg, dopamine, epinephrine, norepinephrine) were significantly correlated with differences in glucose values (P < .001), and that patients on fewer vasopressive medications had less variance in glucose measures. Conclusions: Our findings suggest that although point-of-care testing offers valid and reliable trends in glucose measures for arterial, venous, and capillary blood samples, point-of-care testing differs significantly from clinical laboratory testing, and the 2 methods should not be used interchangeably for intravenous insulin titration. In addition, patients receiving vasopressive medications are more likely to have variances in blood glucose testing.

RES43 Survey Says: Nursing Values and Behaviors With End-of-Life Care in the Intensive Care Unit

Meg Zomorodi, Kristina Riemen, Mary Lynn; University of North Carolina at Chapel Hill School of Nursing, Chapel Hill, NC

Purpose: To develop an instrument to measure nursing values and behaviors when providing end-of-life care. Background: Although the intensive care unit (ICU) is typically viewed as a critical and life-saving environment, many patients receive end-of-life care there. For nurses whose day-to-day practice is focused on saving lives, the transition from intensive care to end-of-life care can be difficult. Nurses are in a pivotal position to improve care for dying patients and their families by redefining the perspective of ICU care and challenging current practice. Methods: This study consisted of 3 phases. Phase I consisted of item development from a content analysis of the literature and qualitative interviews. Phase II consisted of content validity assessment and pilot testing. Phase III consisted of field testing, factor analysis, and reliability estimation. Results: Items generated in phase I were evaluated in phase II by content experts (n = 8) and pilot participants (n = 12), and 2 instruments were developed. In phase III, the Values of Intensive Care Nurses for End-of-Life (INTEL-Values) was subjected to an exploratory factor analysis (n = 695) and a 4-factor model was selected. The Behaviors of Intensive Care Nurses for End-of-Life (INTEL-Behaviors) was also examined through a factor analysis (n = 682), and a 2-factor model was selected. Reliability testing of both instruments during a 2-week period yielded low kappa values (0.05–0.40), although the Pearson correlations (0.68–0.81) and intraclass correlation coefficients (0.65–0.81) were high. Conclusions: The INTEL-Values was problematic in terms of item-to-item correlations and test-retest reliability. This might be partially attributable to the recognized difficulty in measuring attitudes. The INTEL-Behaviors had higher factor loadings, possibly because behaviors are more concrete. Future work will consist of continued refinement of the instruments and construct validity testing. 
Nurses who participated in this study stressed the importance of continued work in this area.
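One way low test-retest kappa can coexist with high correlations, as reported above, is skewed marginal distributions (the so-called kappa paradox); a minimal sketch with a hypothetical 2x2 test-retest table:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 test-retest table [[a, b], [c, d]]:
    a and d are agreements, b and c are disagreements."""
    n = a + b + c + d
    po = (a + d) / n                                          # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2     # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical table with skewed marginals: 90% raw agreement,
# yet kappa is modest (~0.23) because chance agreement is high
print(round(cohens_kappa(88, 5, 5, 2), 2))
```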

RES44 Temporal Artery Scanning Falls Short as a Secondary, Noninvasive Thermometry Method for Trauma Patients

Kathleen A. Marable; Grant Medical Center, Columbus, OH

Purpose: To determine whether temporal artery (TA) scanning thermometry could be an accurate, noninvasive backup method of thermometry in patients with oral or mandibular traumatic injuries. Background: Oral and mandibular trauma pose barriers to oral thermometry. Methods: We compared 3 techniques of TA scanning with axillary and oral thermometry in critical care patients. This was a prospective, nonrandomized study in which eligible participants served as their own controls. We obtained and recorded the body temperature of each patient 1 time using each of 5 different methods: (1) sublingual oral temperature measured with a correctly calibrated oral thermometer probe; (2) axillary temperature; (3) temporal scanning by drawing the probe across the forehead and ending behind the ear; (4) temporal scanning by forehead scanning alone; and (5) temporal scanning using the behind-the-ear position alone. Results: The maximum acceptable average difference between any 2 methods was selected at 0.5°F (0.25°C). Under the assumption that the standard deviation (SD) between any 2 devices would be 1.1°F (0.625°C) and with selected type I error of 5% and type II error of 10%, the minimum required sample size was 68 patients. For each comparison, Bland-Altman analysis was performed. Bland-Altman analysis corrects for situations in which correlation is high but agreement is poor, and it provides an estimate of the variability in the differences between 2 methods. Linear regression was used to examine whether the differences between any 2 methods varied systematically according to the patient’s underlying oral temperature. For each comparison of an alternative method with oral thermometry, the proportion of patients with a difference of 0.5°F was tabulated. The paired t test was used to compare the average absolute difference in temperature between oral and axillary thermometry to the average absolute difference between each TA scanning method and oral thermometry. 
Finally, a McNemar test was performed to assess whether one method was more likely than another method to classify a patient as having a fever. Conclusions: Our results indicate that TA scanning methods were, at best, comparable to axillary measurements. In addition, the performance of the TA scanners varied with body mass index, whereas axillary readings did not.
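Bland-Altman analysis as described above reduces to computing the mean bias and 95% limits of agreement from paired differences; the readings in the example are hypothetical illustrations, not study data.

```python
from math import sqrt

def bland_altman(m1, m2):
    """Mean bias and 95% limits of agreement for paired readings
    from two measurement methods (Bland-Altman analysis)."""
    diffs = [a - b for a, b in zip(m1, m2)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical oral vs temporal-artery readings in °F (illustration only)
oral = [98.6, 99.2, 100.4, 98.9, 101.0, 99.7]
ta = [98.2, 99.5, 99.8, 98.6, 100.1, 99.6]
bias, lower, upper = bland_altman(oral, ta)
```

Unlike a correlation coefficient, the limits of agreement directly express how far two methods may disagree for an individual patient, which is the clinically relevant question here.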

RES45 The Beach Chair Position in the Intensive Care Unit: An Evaluation of Outcomes

Kelly Anne Caraviello, Lynne Nemeth; Medical University of South Carolina, Charleston, SC

Purpose: Bed rest in critically ill patients leads to many complications. The beach chair position (BCP) has been used to promote lung expansion, but research has not yet validated the effectiveness of this positioning technique. This study evaluates lung compliance and expansion in critical care patients placed in the BCP and tests the hypothesis that the BCP reduces rates of ventilator-associated pneumonia (VAP), length of stay (LOS) in the intensive care unit (ICU), and ventilator days. Background: The lungs expand more efficiently in an upright position. VAP is the leading cause of death among patients with hospital-acquired infections and increases patients’ ICU stays and hospital costs. Research shows the majority of nurses are aware that upright positioning helps prevent VAP. Current guidelines related to the prevention of VAP advocate semirecumbent positioning, but limited research has formally linked the BCP to VAP outcomes. Methods: A total of 200 ICU patients who meet the inclusion criteria are placed in the BCP 4 times daily using the Hill-Rom bed. VAP rates, ventilator days, hospital LOS, and ICU LOS are collected using administrative data for study patients and a retrospective cohort. Chi-square analyses are done to explore associations of the dichotomous variables: VAP and BCP positioning. The continuous variables ICU LOS and ventilator days are examined by using pooled t tests. Further examination of the relationships of the main outcome variables uses linear regression for continuous variables and logistic regression for categorical variables, adjusting for demographic variables: age, sex, race, and unit. Results: Study enrollment of the 200 patients is expected to be complete in December 2008. Preliminary data collected from 129 patients to date suggest a reduction in the rate of VAP; however, the data set is not yet complete, and it would be premature to speculate further. 
Results will be available by the time of NTI presentation. Conclusions: The concept of early mobilization of ventilated and sedated patients is a relatively new idea. Positive outcomes that may be demonstrated by using the BCP may lead to this positioning technique becoming a daily critical care nursing practice. One day, hearing the question, “Have you put your patient into the beach chair position today?” may be regarded with the same importance as today’s commitment to turning patients every 2 hours and having the head of the bed at 30° elevation.
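The planned chi-square analysis of VAP by BCP positioning can be sketched for a 2x2 table; the counts in the example are hypothetical placeholders, not study data.

```python
from math import erfc, sqrt

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic and 1-df P value for a 2x2 table
    (e.g. VAP yes/no by BCP yes/no), without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    x2 = num / den
    # Survival function of the chi-square distribution with 1 df
    p = erfc(sqrt(x2 / 2))
    return x2, p

# Hypothetical counts: VAP yes/no in the BCP group vs the
# retrospective cohort (illustration only)
x2, p = chi_square_2x2(8, 92, 16, 84)
```

For 1 degree of freedom the chi-square variable is the square of a standard normal, so the P value is available from `erfc` without any statistics library.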

RES46 The Impact of a Comprehensive Oral Hygiene Program on the Development of Ventilator-Associated Pneumonia

Erin Sarsfield, Nancy Villanueva; Penn State Hershey Medical Center, Hershey, PA

Purpose: To determine what effect the initiation of a comprehensive oral hygiene program (COHP) composed of brushing the teeth every 12 hours, cleaning the teeth and gums every 4 hours, and applying mouth moisturizing gel every 6 hours had on the rate of ventilator-associated pneumonia (VAP) for critically ill patients receiving mechanical ventilation in the neuroscience and medical intensive care units (NSICU and MICU). Background: Patients receiving mechanical ventilation are at increased risk of developing VAP. VAP increases morbidity, mortality, length of stay, and cost. Interventions making a positive impact on VAP have been identified. One nursing intervention is the quality and consistency of oral care. The oral care policy for critically ill patients at our institution was revised to include standardized mouth care procedures and supplies. The impact this change had on nursing practice and the incidence of VAP was evaluated. Methods: A quantitative study design measuring VAP rates before and after the intervention was used. Chart reviews were conducted on patients having positive respiratory nosocomial infection markers in the 6 months before and after implementation of the COHP. The oral care policy was revised and in-service education was conducted. Charts were reviewed to determine whether the patient met inclusion criteria. For those meeting the criteria, the determination of VAP was made by using the flow diagram from the Centers for Disease Control and Prevention’s National Nosocomial Infections Surveillance System. The VAP rate was calculated for the 6 months before and after initiation of the COHP. A 2-tailed t test was performed by using SPSS Version 15. Data from the NSICU and MICU were analyzed separately and in combination. Results: The VAP rate for the NSICU was 20.07 before the intervention and 7.17 after the intervention. A 2-tailed t test showed statistical significance at P = .01. 
For the MICU, the VAP rate was 11.15 before the intervention and 5.32 after the intervention. Analysis revealed no statistically significant difference between these rates (P = .08). Combining the units revealed a statistically significant difference between the rates before and after the intervention (P = .004). Conclusions: The COHP intervention made a significant difference in improving VAP rates in NSICU and in the combined units. Although positive trends were produced in the MICU VAP rate, the difference was not statistically significant. This finding may be attributed in part to the fact that differences between rates before and after the intervention were not as great in the MICU as in the NSICU. These results provide strong evidence to support the COHP intervention and suggest that more work is required to further reduce VAP rates.

RES47 Tight Glycemic Control: The Effectiveness of a Computerized Insulin Dose Calculator

Cheryl Dumont, Rhonda Kiracofe, Cyril Barch, Leigh Ann Sabbagh, Mary David, Nicole Ryder, Kim Rudy, Bonnie Harvey, Ruth Wenzel, Lisa Dellinger, Darlene Louzonis; Winchester Medical Center, Winchester, VA

Purpose: To examine the effects of using a computerized insulin dosing tool to facilitate tight glycemic control. The research questions are as follows: (1) Is there a difference in tight glycemic control related to nurses’ use of a computerized dose calculator versus a paper protocol? (2) Is there a difference in nurses’ satisfaction with the process of tight glycemic control related to their use of a computerized dose calculator versus a paper protocol? Background: Tight glycemic control is important to patients’ outcomes, but the process of tight glycemic control is risk prone and labor intensive for nurses. Technology offers a computerized dose calculator for intravenous insulin dosing that may improve glycemic control. No randomized controlled trials have examined the effects of nurses’ use of a computerized dose calculator compared with a paper protocol for glycemic control, and nurses’ satisfaction with the process has not been examined. Methods: This is an ongoing prospective randomized controlled trial of the safety and efficacy of a computerized dose calculator (EndoTool) compared with a paper protocol for intravenous insulin dosing. Glycemic control is being measured as blood glucose in milligrams per deciliter, the percentage of measurements in the target range, time to reach target, and the number of hypoglycemic events. The sample will be 300 intensive care patients: 150 randomized to the computerized dose calculator and 150 in the control group who will receive usual care with the paper protocol. Nurses providing care for these patients will be a cohort of ICU nurses who usually work in the study units and will be responsible for the patients’ total care. Results: An analysis of 42 patients demonstrated that the mean blood glucose was 145 mg/dL with EndoTool versus 154 mg/dL with the paper protocol. Mean (SD) time to target was 3 (1.5) hours with EndoTool and 5 (6) hours with the paper protocol. 
Measurements in the target range were 63% with EndoTool versus 55% with the paper protocol, and no hypoglycemic episodes (<40 mg/dL) occurred with EndoTool; 4 occurred with the paper protocol. These results did not reach statistical significance. Nurse satisfaction (n = 24), on a scale of 1 to 10 (1, very unsatisfied; 10, very satisfied), was 8.4 (1.6) with EndoTool and 4.8 (2.5) with the paper protocol (t = −4.959, P < .001). Conclusions: The sample size is not yet large enough to draw conclusions. We are also collecting data on our patients’ outcomes for future analysis of the relationships between glucose control and outcomes. We intend to continue to collect data to achieve a sample size of 300 patients and plan to survey another 30 nurses. At this time, the nurses are very happy with the EndoTool as a dose calculator for intravenous insulin.
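A between-groups t statistic like the satisfaction comparison above can be approximated from summary statistics alone. The abstract does not state how the 24 surveyed nurses were split between tools, or whether the comparison was paired, so the equal split below is an assumption for illustration and the resulting t will not match the reported value.

```python
from math import sqrt

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch t statistic and degrees of freedom from group summary
    statistics (means, SDs, group sizes)."""
    se1, se2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / sqrt(se1 + se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# Assumed even split of the 24 nurses (n = 12 per group; illustration only)
t, df = welch_t_from_summary(8.4, 1.6, 12, 4.8, 2.5, 12)
```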

RES48 Universal Violence Precautions to Manage Violence From Patients and Visitors

Gordon L. Gillespie II, Patricia Kunz Howard, Donna Gates, Margaret Miller; University of Cincinnati College of Nursing, Cincinnati, OH

Purpose: To describe the prevention and management of workplace violence that occurs in a pediatric emergency department (ED). The two aims of this study were (1) to identify strategies to prevent the occurrence of workplace violence against health care workers in a pediatric ED, and (2) to identify strategies that could be implemented to provide support to the worker during or immediately following a violent event when the worker is the target of patient or visitor violence. Background: Workplace violence is 4 times more likely to occur in a health care setting than in all private industry combined. Although workplace violence in the ED setting has been recognized for a number of years, no studies have described best practices for managing violent events from the perspective of ED workers. As a result, it was important to study the problem of workplace violence against ED workers by using qualitative interviews to explore potential solutions. Methods: Interviews were conducted individually with 31 workers (physicians, nurses, respiratory therapists, child life specialists, paramedics, and patient care attendants) in a Midwestern pediatric ED. Interviews were transcribed verbatim, yielding 690 pages of transcripts. Interview data were analyzed by using a modified constant comparative analysis method. In an effort to triangulate the interview data, 40 hours of direct observations were conducted in addition to analyzing 499 organizational policies, all educational opportunities, intranet announcements, and intranet news stories during an 18-month period. Results: Intervention strategies were identified at the primary, secondary, and tertiary prevention levels. Primary intervention strategies included better control of visitor access to treatment areas and mandating training of all ED direct care providers. Secondary intervention strategies included using de-escalation techniques, setting limits on unacceptable behaviors, and intervening when necessary to stop violent events. 
Tertiary intervention strategies included providing self-care, holding informal debriefings, taking a break, attending to any physical injuries of the patient or workers, completing safety event reports, and, for extreme violence, filing police reports. Conclusions: ED workers are knowledgeable about intervention strategies that may reduce the number of violent events as well as moderate the adverse effects for ED workers who experience violent events. Research is needed to test the effectiveness of the strategies. To increase the chance for success, bedside caregivers must be involved in both the development and implementation of those strategies.

RES49 Use of the Bispectral Index Monitor When Extubating Cardiac Surgery Patients

Linda L. Henry, Alan Speir, Lisa Martin, Jennifer Anderson, Linda Halpin, Niv Ad, Sharon Hunt, Janice White; Inova Heart and Vascular Institute, Falls Church, VA

Purpose: Frequently, open heart patients are intubated on arrival in the intensive care unit (ICU), where the nurse assumes responsibility for extubation. Nurses rely on experience and an extubation protocol to determine a patient’s readiness to extubate. The bispectral (BIS) monitor assesses a patient’s level of mental arousal and awareness while he or she is anesthetized or in the postoperative phase while he or she is sedated. This study was done to determine if the BIS might facilitate waking and earlier extubation of open heart surgery patients. Background: Data indicate that rapid extubation after cardiac surgery reduces morbidity and mortality. BIS monitoring has been used to assess sedation in patients undergoing procedures while receiving anesthesia or for continuously sedated and neurologically compromised ICU patients. In these situations, the goal is a state of unawareness to prevent recall. Few data exist on using the BIS monitor in the arousal phase following surgical procedures to facilitate the extubation of intubated patients. Methods: In a matched case-control study, 30 prospective, stable patients returning to the ICU with a BIS monitor were matched 1:1 to pre-BIS patients on age, sex, type of surgery, status on arrival in the ICU, and surgeon. The data collected included the patient’s age and sex; the type of surgery; body temperature, pH, and carbon dioxide level on arrival and at extubation; the total amounts of propofol and pain medication received before extubation; and BIS and Richmond Agitation-Sedation Scale (RASS) scores. Descriptive statistics were used to describe the groups. For the continuous data, t tests were used, and for categorical data, a chi-square test was used to assess differences between the groups. Multiple regression was used to determine which variables were predictive of time to extubation. Results: A total of 25 BIS patients were matched on selected criteria to 25 pre-BIS patients (N = 50). 
Most patients were male (78%) and underwent coronary artery bypass surgery (90%); the mean age was 63.6 years, and the mean time to extubation was 5 hours 50 minutes after surgery. Chi-square tests indicated no differences between groups for sex and type of surgery (P > .05). Results of t tests indicated no differences between groups for age, amount of propofol and pain medication received, and time to extubation (P > .05). A significant regression equation was found (F(4,52) = 12.79, P < .001), with an R2 of 0.496. Total propofol, total hydromorphone, and age were significant predictors of time to extubation. Conclusions: The BIS monitor did not appear to facilitate waking and earlier extubation for this group of patients at our institution. Efforts to reduce the time to extubation may need to focus on reviewing current protocols for the use of propofol and pain medication before extubation in patients whose condition is stable after cardiac surgery.
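The reported regression statistics can be cross-checked: for an ordinary least squares model, the overall F statistic follows directly from R2 and the model and residual degrees of freedom. A quick sketch (the formula is the standard OLS identity, not taken from the abstract) confirms that R2 = 0.496 with 4 and 52 degrees of freedom reproduces the reported F of 12.79:

```python
def f_from_r2(r2: float, df_model: int, df_resid: int) -> float:
    """Overall F statistic for an OLS regression, from R-squared."""
    return (r2 / df_model) / ((1 - r2) / df_resid)

# Reported values from the abstract: R^2 = 0.496 with F(4, 52)
f = f_from_r2(0.496, 4, 52)
print(round(f, 2))  # 12.79, matching the reported F statistic
```

The agreement indicates the reported R2 and F values are internally consistent with each other.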

RES50 Working in an eICU Unit: Life in the Box

Trudi B. Stafford, Mary Myers, Anne Young, Janet Foster, Jeffrey Huber; University of Pittsburgh Medical Center–Passavant, Pittsburgh, PA

Purpose: This ethnographic study of the VISICU eICU (Baltimore, Maryland) work environment in a large Midwestern health care system describes everyday life working in telemedicine intensive care. Background: The eICU telemedicine model of care uses technology to provide intensivist-driven care in settings without bedside intensivist coverage. Previous studies of the eICU model of care mainly focus on quantitative elements and evaluate specific clinical outcomes. This study examined the way such units function. Methods: Data were gathered through 60 hours of observation and formal interviews of eClinician team members. Thirteen eNurses, 3 ePhysicians, and 1 IT Systems Analyst participated in semistructured interviews, and 27 additional eClinicians participated in the field study. Years of clinical experience and experience in critical care ranged from 5 years to more than 30 years. Results: Findings indicated that the eICU work environment is like working in an air traffic control center. eClinicians work at computer screens and monitor multiple ICU patients. The eClinician has access to information that is not always readily available to the bedside team. The eClinician provides this information, along with recommendations, to the bedside team, which has hands-on control to change the course of events. Effective communication and interactions between the eClinicians and the bedside team are critical to the success of this practice model. Conclusions: The eICU model of care is a practical way to provide experienced ICU nurses and intensivists to supplement the bedside team. The work environment provides a way for eNurses to continue to use their critical thinking skills and ICU experience in a setting with fewer physical demands than bedside ICU nursing. The ePhysicians find value in the eICU model of care from a patient safety and cost-avoidance perspective but admit that the ideal care model includes an intensivist at the bedside. 
Further study is needed to describe the eICU care model from the perspective of the bedside ICU team. This perspective is needed in order to determine how to develop appropriate protocols, policies, communication plans, and practices that will ensure ongoing effective collaboration between the 2 entities.


Presented at the AACN National Teaching Institute in New Orleans, Louisiana, 2009.