RS2 Multidisciplinary Development and Implementation of a Blood Conservation Program Among Coronary Artery Bypass Patients

Linda Henry, Linda Halpin, Sari D. Holmes, Alan Speir, Elmer Choi, David Fitzgerald, Anthony Rongione, Niv Ad; Inova Heart and Vascular Institute, Falls Church, VA


To examine the effect of blood product use on patients’ outcomes after isolated first-time coronary artery bypass graft (CABG) and to determine if a multidisciplinary blood conservation program resulted in reduced use of blood products and any associated cost savings for the program.


According to the 2009 Society of Thoracic Surgeons (STS) National Database, 59% of all CABG patients receive blood products. Receipt of blood products has been associated with increased morbidity and mortality and with decreased quality of life beyond hospitalization.


Our institutional STS database (overseen and managed by our advanced practice specialist) was used retrospectively to track use of blood products, operative mortality, morbidity, and readmissions between January 2005 and December 2009 (N = 3061). In 2007, a criterion-driven algorithm for blood transfusion was developed by a multidisciplinary team of clinical nurses, anesthesiologists, cardiac surgeons, and physician assistants. Change in blood product use was examined, and the associated costs were determined.


Overall, patients who received any blood product, whether intraoperatively or postoperatively, were at 8 times greater risk for operative mortality and were more likely to have prolonged ventilation, pneumonia, renal failure, and stroke (P < .001). Neither 30-day readmissions nor rates of sternal wound infection differed significantly between patients who received blood products and those who did not. From 2005 to 2009, intraoperative/postoperative use of blood products decreased from 47.8% to 15.2% (P < .001). The associated cost savings thus far have amounted to more than $1 million.


Results indicate that patients who received blood products had worse outcomes after CABG. Implementation of a collaborative multidisciplinary blood conservation program dramatically decreased blood use and yielded a significant cost savings for the hospital. Efforts should be made nationally to use evidence-based protocols and standardize perfusion techniques to decrease blood use during cardiac surgery.

RS5 An Interactive Online Education Program Improves Nurses’ Knowledge of Electrocardiographic Monitoring: Early Findings of the PULSE Trial

Marjorie Funk, Catherine G. Winkler, Kimberly Stephens, Jeanine L. May, Kristopher P. Fennie, Leonie Rose, Yasemin Turkman, Barbara J. Drew; Yale University School of Nursing, New Haven, CT


This analysis of data from the Practical Use of the Latest Standards for Electrocardiography (PULSE) trial was done to evaluate whether nurses’ knowledge of electrocardiographic (ECG) monitoring improved after completion of a novel interactive online educational program based on the American Heart Association–AACN Practice Standards for ECG Monitoring.


Despite advances in ECG monitoring technology, monitoring practices are inconsistent and often inadequate. It is unclear whether this inadequacy is partly due to knowledge deficits of nurses. We designed the PULSE trial to evaluate the effect of implementing the practice standards on nurses’ knowledge, quality of care, and patients’ outcomes. We are reporting the analysis of nurses’ knowledge after nurses in hospitals randomized to the experimental group completed the online education intervention.


The PULSE trial is a 5-year multisite randomized clinical trial. The sample included nurses on adult cardiac units in 17 hospitals (15 in the United States, 1 in Canada, 1 in China). After nurses completed a 20-item knowledge test that was developed, pilot tested, and revised on the basis of an item analysis, hospitals were randomized to the experimental or control group. Nurses in the experimental group underwent the online education program that covered essentials of ECG monitoring and arrhythmia, ischemia, and QT interval monitoring. After completing the program, the nurses retook the knowledge test. Nurses in control group hospitals retook the knowledge test, but did not have the online education.


The sample of 2544 nurses was 89% female and 75% white, with a mean age of 38 years; 72% had a bachelor of science degree in nursing or higher. At baseline, the mean score was 48.2 (SD, 11.9) out of a possible 100. After the experimental group completed the intervention, the mean score was 49.2 (SD, 11.6) for nurses in the control group and 70.2 (SD, 15.5) for nurses in the experimental group. The experimental group improved significantly more than the control group (t = −29.03; P < .001). Of the 4 subsections of the test, essentials of ECG monitoring had the highest mean posttest score (76.3; SD, 17.8) and ischemia monitoring had the lowest score (54.4; SD, 28.8) among nurses in the experimental group.


Although knowledge test scores improved significantly after the online ECG monitoring education, whether this translates into improvements in the quality of care related to ECG monitoring and patients’ outcomes remains to be determined. We are in the process of collecting data on these outcomes. In the final phase of the study, we will examine if improvements seen immediately after the intervention are sustained.

RS21 End-Tidal Carbon Dioxide As a Physiological Measure of Response To Clustered Nursing Interventions in Neurological Patients Receiving Mechanical Ventilation

Laura Genzler, Sue Sendelbach, Pamela Jo Johnson, Sarah Parangakis; Abbott Northwestern Hospital, Minneapolis, MN


To examine the physiological stress response to clustered nursing interventions in neurological patients receiving mechanical ventilation. We sought to determine in this pilot study whether clustering of nursing interventions has a detrimental impact on neurological patients.


Physiological stress increases metabolism and the level of carbon dioxide, which acts as a vasodilator, potentially increasing intracranial pressure (ICP). Nurses cluster patient care activities to allow patients maximal rest between interventions. Although most nursing guidelines recommend clustering nursing care activities to minimize patients’ stress, little evidence describes the stress response. Carbon dioxide levels may be useful for monitoring changes in cerebral blood flow. Changes in end-tidal carbon dioxide (etco2) level may help assess the number of care activities a patient can tolerate before cerebral blood flow is affected.


A convenience sample of 15 patients with a neurological diagnosis who were receiving mechanical ventilation was used to examine the effect of clustered care on stress response. Nurses recorded start/stop times of care, types of care, vital signs, and etco2 levels 4 to 6 times in a 24-hour period. Demographic data were collected to describe the sample. Stress response was defined as a 10% change in etco2 and was calculated as the percentage change in etco2 from before care to 5 minutes after the care activity was started. Care clustering was defined as providing >6 care activities in a single nursing interaction. Means and percentages were compared by clustering status. Analysis included chi-square tests for categorical variables and t tests for continuous variables.
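The stress-response and clustering definitions described in the methods reduce to 2 simple thresholds. A minimal Python sketch restating those definitions, with hypothetical etco2 readings in mm Hg (the function names are illustrative, not from the study):

```python
def percent_change(pre_etco2, post_etco2):
    """Percentage change in etco2 from before care to 5 minutes after."""
    return (post_etco2 - pre_etco2) / pre_etco2 * 100.0

def is_stress_response(pre_etco2, post_etco2, threshold=10.0):
    """Stress response: a 10% change (in either direction) in etco2."""
    return abs(percent_change(pre_etco2, post_etco2)) >= threshold

def is_clustered(care_activities):
    """Clustered care: >6 care activities in a single nursing interaction."""
    return care_activities > 6

# Example: a rise from 40 to 44 mm Hg is a 10% change, meeting the threshold
print(percent_change(40.0, 44.0), is_stress_response(40.0, 44.0), is_clustered(7))
```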


The sample comprised 7 men and 8 women from 18 to 92 years of age (mean, 54.3 years; SD, 21.7 years). In total, 70 nursing care interactions were observed. The mean number of interactions per patient was 4.6, and the mean number of care activities per interaction was 6.1 (range, 3–10). Of 62 interactions with complete data, 61.3% (n = 38) involved clustered care. The mean percentage change in etco2 at 5 minutes differed significantly between patients with clustered care and patients without clustered care (6.6% vs 0.01%; P = .001). Patients with clustered care were significantly more likely than patients with low clustering to exhibit a stress response at 5 minutes (23.7% vs 0%; P = .01).


Ventilator-dependent neurological patients who received >6 clustered care activities experienced a greater mean change in etco2 than did patients who received <6. Contrary to nursing guidelines that recommend clustered care to reduce stress, our findings suggest that providing fewer care activities in 1 nursing interaction may minimize induced stress. Further research should examine changes in etco2 across various levels of clustering and the effect of clustering on patients' outcomes.

RS26 Impact of Canine-Assisted Ambulation on Congestive Heart Failure Patients’ Ambulation Outcomes and Satisfaction

Samantha Abate, Michele Zucconi, Bruce Alan Boxer; SJ Healthcare Regional Medical Center, Vineland, NJ


To combine ambulation and animal-assisted therapy (AAT) in patients with congestive heart failure (CHF) and document benefits with sound data. Three outcome measures were identified: decreasing the number of patients who refuse to walk, increasing the distance that patients walk, and assessing patients’ satisfaction with having the chance to walk with a therapy dog.


CHF is a leading cause of inpatient admissions and health care expenditures. Early ambulation decreases both hospital length of stay and readmission rates. However, patients often refuse to walk for a variety of reasons, thus losing the benefits of this simple intervention. Although AAT is a safe, low-cost, and effective adjunct to the usual plan of care for cardiac patients, statistically significant research findings in support of AAT are sparse.


CHF patients were approached by a specially trained restorative aide (part of a CHF program already in place) and asked if they would like to walk, and their responses were recorded. All patients, irrespective of their initial answer, were then offered the chance to walk with the therapy dog. Patients who denied allergy to or fear of dogs walked alongside the therapy dog for as long a distance as they were willing or able to tolerate; this intervention constituted canine-assisted ambulation (CAA). Distance walked was measured, in steps, with a pedometer. After ambulation, patients were surveyed about their satisfaction with the experience. Study data were analyzed and compared with a randomly selected historical sample drawn from existing records of CHF patients.


Significant improvements in the number of patients who refused ambulation (P < .001) and in the distance ambulated (P < .001) were seen when CAA was incorporated into the standard ambulation process. A 537-patient historical CHF population had an ambulation refusal rate of 28%; when offered the chance to participate in CAA, only 7.2% of the study population refused ambulation. Of the 69 patients in the study sample, 13 initially refused ambulation and then agreed when CAA was offered (P < .001). Mean distance ambulated increased from 120.2 steps in a randomly selected, stratified historical sample to 235.1 steps in the CAA study sample (P < .001). Patients were unanimously satisfied with CAA.


This study has shown CAA to be a safe and effective addition to an existing early ambulation program for CHF patients. By encouraging early ambulation, CAA has the potential to decrease hospital length of stay and thus decrease the high costs of CHF care. Although the change was not statistically significant (P = .20), length of stay decreased by 1 day in the study population. Future research examining the benefits of CAA in other populations of patients and in various settings is warranted.

RS1 A Comparison of Patient’s Self-Report of Pain to the Nonverbal Pain Scale

Shawn Cosper, Carol Hinkle, Bettina Riley, Cindy Briner; Brookwood Medical Center, Birmingham, AL

To validate a nonverbal pain assessment tool by comparing the nonverbal pain score determined by the nurse with the patient’s self-report of pain.

Nurses caring for patients who are unable to self-report their pain, such as cognitively impaired or sedated, ventilator-dependent patients, use observational rating scales to assess pain. The Nonverbal Pain Scale (NVPS) is one such observational rating scale designed for adult patients who are unable to self-report their pain. The NVPS has not been directly compared with patients' self-reports of pain.

A convenience sample of patients was recruited from 5 intensive care units (ICUs) in a >500-bed medical center. The patients were randomly selected by a lottery system. Inclusion criteria were that the nurse had never taken care of the patient, the patient was 19 years old or older, the patient was alert and able to communicate verbally or nonverbally, and the patient had been in the unit for at least 4 hours before assessment. Patients who had a behavioral health diagnosis, were in another research study, or had other physiological conditions (eg, hemodynamically significant arrhythmias) in the preceding 4 hours were excluded. One hundred forty-five paired assessments, evenly distributed among the 5 ICUs, were used in the data analysis.

The Wilcoxon signed-rank test was used for data analysis. In 46 of the 145 paired comparisons (32%), the nurse assessed no pain with the NVPS and the patient reported no pain. However, nurses' assessments of pain with the NVPS differed from patients' self-reports of pain in 99 comparisons (68%). Of the comparisons in which pain was reported, the NVPS yielded significantly lower scores than did patients' self-reports (Z = −7.01; P < .001); the mean NVPS score was 1.99 versus a mean self-reported score of 4.29.
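For readers unfamiliar with the test, the Wilcoxon signed-rank procedure compares paired scores (here, NVPS vs self-report) by ranking the absolute paired differences. A minimal pure-Python sketch of the W+ statistic only (the reported Z value comes from a normal approximation not shown here); the scores in the example are hypothetical, not study data:

```python
def wilcoxon_w_plus(first_scores, second_scores):
    """Sum of ranks of positive paired differences (second - first).

    Zero differences are dropped; tied absolute differences receive
    average ranks, as in the standard Wilcoxon signed-rank procedure.
    """
    diffs = [b - a for a, b in zip(first_scores, second_scores) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        average_rank = (i + j) / 2 + 1  # positions are 0-based, ranks 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = average_rank
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

# Hypothetical paired pain scores: NVPS vs patient self-report
print(wilcoxon_w_plus([2, 1, 3, 0], [5, 4, 3, 2]))  # prints 6.0
```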

The scores nurses obtained with the NVPS differed significantly from patients' self-reports. Of the patients who reported pain, 1 in 4 had a total score difference of 4 or more between their self-report of pain and the NVPS. This finding has implications for nursing practice because the NVPS underestimated patients' self-reports of pain. Use of the NVPS should be limited to cases in which patients are unable to report their pain verbally or by other methods; for example, alert patients receiving mechanical ventilation can use a chart to indicate their pain score. Nurses should consider that a patient might be experiencing pain even when the patient's behaviors do not indicate it. The patient's self-report of pain remains the best indicator for pain assessment in critically ill patients.

Diane Counts, Mary Acosta, Holly Batten, Eileen Foos, Kim Hays-Ponder, Linda Hearon, Olga Macairan, Linda Thomas, Maryse Whitsett, Lori Williams, Elizabeth Twiss; Munroe Regional Medical Center, Ocala, FL

To examine the level of agreement between measurements of body temperature obtained with disposable and nondisposable electronic thermometers in critically ill patients.

Emphasis on infection prevention has increased the use of disposable medical equipment for each patient. Few data are available on the clinical accuracy of digital, disposable oral temperature devices.

A method-comparison study design was used to examine the agreement between a disposable electronic thermometer (Medichoice, Measure Technology Co, Wuxi City, Jiangsu Province, China) and a nondisposable oral electronic thermometer, the clinical reference standard for noninvasive temperature measurement. Temperatures were taken once with both devices in a convenience sample of critically ill patients. Bias and precision were calculated to quantify the differences between the 2 devices, with data graphed by the Bland-Altman method. The percentages of temperatures obtained with the disposable electronic thermometer that were >0.5ºC and >1.0ºC higher or lower than the clinical reference temperatures also were calculated.
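In a method-comparison design, bias and precision are simply the mean and standard deviation of the paired differences. A minimal Python sketch with hypothetical readings in ºC (the 0.5ºC cutoff is the one described above; the function name is illustrative):

```python
from statistics import mean, stdev

def bland_altman_summary(reference_temps, test_temps, cutoff=0.5):
    """Bias (mean difference), precision (SD of differences), and the
    fraction of test readings differing from the reference by > cutoff."""
    diffs = [t - r for r, t in zip(reference_temps, test_temps)]
    bias = mean(diffs)
    precision = stdev(diffs)  # sample SD, n - 1 denominator
    fraction_outside = sum(abs(d) > cutoff for d in diffs) / len(diffs)
    return bias, precision, fraction_outside

# Hypothetical paired temperatures (ºC): reference device vs disposable device
bias, precision, outside = bland_altman_summary(
    [36.0, 37.0, 38.0, 36.5], [36.2, 36.0, 38.1, 36.4]
)
```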

A total of 48 critically ill patients were studied over 2 months. Temperatures ranged from 35.4ºC to 39.1ºC, averaging 36.8ºC (SD, 0.8ºC). Bias and precision for the disposable device were −0.26ºC (SD, 0.56ºC). The numbers of temperatures measured with the disposable oral thermometer that were >0.5ºC and >1.0ºC above or below the clinical reference temperatures were 10 (21%) and 6 (13%), respectively.

Experts recommend that 95% of temperatures obtained with a temperature device that is to be used clinically as a substitute for core temperatures in hospitalized patients should be within 0.5ºC of the clinical reference standard. We found that the disposable oral electronic thermometer did not meet that recommendation: more than 20% of its readings differed from the clinical reference standard by more than 0.5ºC. As this is the first study to clinically evaluate this device, additional studies are needed, particularly in critically ill patients with abnormally high and low temperatures.

Debra Ryan, Barb Rogers, Kristen Simpson, Kim Miksa, Sue Ward; Spectrum Health, Grand Rapids, MI

To compare oxygen saturations measured via pulse oximetry (Spo2) at different sensor sites (ear, forehead, finger), connected to 2 different manufacturers’ processing computers, with results of patients’ arterial blood gas analysis. The results of this study were anticipated to assist nurses in selecting the most accurate sensor location and type of equipment to be used for noninvasive monitoring of oxygen saturation.

A common method for assessing the oxygenation status of patients is the use of pulse oximetry. The Spo2 sensor is typically placed on a finger. Specially designed sensors may also be applied to sites such as the earlobe or forehead in situations of poor peripheral perfusion or when the hand is not available. Studies conducted on the accuracy of sensor sites are limited and often involved only small numbers of healthy volunteers, making generalization to critically ill patients difficult.

A method-comparison study design was used to compare Spo2 readings from a disposable finger sensor, a nondisposable ear sensor, and a nondisposable forehead sensor with oxygen saturation (Sao2) results from arterial blood gas analysis. Six Spo2 readings and an arterial blood gas analysis were obtained from 38 hemodynamically stable adult medical critical care patients. The order of readings was randomly assigned. The Bland-Altman method was used to calculate and graph bias and precision scores between the oxygen values. Analysis of variance was used to determine differences between variables (sensor location, type of processing computer, and demographics). The level of significance for all tests was set at P < .05.

Of 40 patients enrolled, 2 were excluded from analysis; the sample size of 38 was based on a power analysis with an effect size of 0.35. Using a 3 (sensor site) × 2 (manufacturer) between-subjects factorial analysis of variance, we found no statistically significant differences in oxygen values—either from main effects (manufacturer or sensor site) or from an interaction effect of manufacturer and sensor site. Post hoc analyses of demographic characteristics also did not show significant differences. Bland-Altman graphs showed that the disposable finger sensors appeared to have the least variability, but, on average, all 3 sensors provided reliable readings.

It appears that neither the manufacturer nor the sensor site has a significant effect on difference scores between oximetry devices and corrected oxygen saturations from arterial blood gas analysis. All the sensors provided reliable readings; however, the ear and forehead sensors require more maintenance to ensure continued accuracy. This study involved only adult patients in stable condition in a medical critical care unit. Future research should include patients in less stable condition, other populations of intensive care patients, and longer durations of sensor use.

Mae Pasquale; Cedar Crest College, Allentown, PA

(1) To examine the effects of family presence during trauma resuscitation on outcomes of anxiety, satisfaction, and adaptation in family members who were present and not present during trauma resuscitation efforts and (2) to determine if prior stressors, severity of the patient’s injury, or family demographics influence family outcomes in family members present and not present during trauma resuscitation efforts.

Trauma resulting in critical injury occurs suddenly and unexpectedly, and the impact on the families of persons injured can be immense. Although attention is appropriately focused on the initial care of the critically injured patient, attending to the needs of the patient’s family must also be considered a priority. Allowing family members the option to be present during trauma resuscitation efforts may promote positive outcomes, but health care providers have significant concerns about this practice.

A prospective, multivariate, comparative design was used. Within 48 hours of admission to the intensive care unit, adult family members of critically injured patients admitted to a level I trauma center in Northeastern Pennsylvania were asked to participate. The Resiliency Model of Family Stress, Adjustment, and Adaptation guided the selection of variables. Prior stressors were measured with the Family Inventory of Life Events. Severity of injury was calculated by using the Injury Severity Score. Anxiety, satisfaction, and adaptation were measured by the State-Trait Anxiety Inventory, a modified version of the Critical Care Family Needs Inventory, and the Family Member Well-being Index, respectively.

A sample of 50 family members of 38 critically injured adult patients participated. Prior stressors and severity of injury did not influence anxiety, satisfaction, or adaptation. Male family members had more positive adaptation than did female family members in the not-present group (t23 = −2.38, P < .05). Relationship to the patient affected anxiety: spouses who were present during resuscitative efforts had significantly higher anxiety scores (mean, 50.71; SD, 12.67) than parents had (mean, 31.83; SD, 16.10). The 2 groups did not differ significantly in anxiety, satisfaction, or adaptation, although family members who were present reported lower anxiety scores than did family members who were not present.

No other published study has investigated outcomes of family presence during trauma resuscitation within 48 hours of admission. Scores for anxiety, satisfaction, and adaptation were equivalent for family members who were present and family members who were not present. Excluding family members from trauma resuscitation efforts does not seem to be warranted at this time. Studies with larger samples and in various populations, as well as experimental studies with longitudinal follow-up are needed.

Jennifer Bond, Kathy Lee, Sherry Robinson; Memorial Medical Center, Springfield, IL

To explore nurses’ perceptions about delirium, including risks, symptoms, diagnosis, and interdisciplinary communication. The study built on our previous quantitative work on a 15-bed medical-surgical intensive care unit (ICU) at a Midwestern university-affiliated Magnet hospital, which examined recognition of symptoms of delirium by nursing and medical staff. This qualitative study was designed to provide baseline data for the development of a delirium education program.

Delirium is a serious clinical syndrome that affects 21% to 73% of ICU patients. Timely recognition of delirium and establishing a diagnosis are essential to providing positive outcomes for critically ill patients. In our previous study, physicians documented the diagnosis of delirium in only 9% (3/33) of delirious patients. Nurses documented delirium symptoms in 94% (31/33) of the same patients, but they did not correlate symptoms to arrive at a diagnosis and rarely notified the physician.

This qualitative exploratory study used focus group methods. Saturation of the data was accomplished in 3 focus group sessions. After signing a consent form, participants selected pseudonyms to ensure confidentiality. The moderator used the same interview guide for each focus group, and interviews were audiotaped and transcribed. Sample interview questions included the following: What kinds of behaviors make you suspect that your patient is experiencing delirium? What do you think interferes with the detection of delirium? How do you communicate and document the presence of delirium? Data were analyzed by using the grounded theory method.

Fifteen nurses were interviewed: 2 men and 13 women with a mean age of 42 years. Participants acknowledged barriers and facilitators to delirium recognition and identified that recognition heightens awareness of the potential adverse outcomes associated with delirium. Barriers included system barriers, uncertainties, assumptions, knowledge deficits, inconsistent communication, and lack of symptom assimilation. Findings also indicated that even when nurses recognize delirium, they communicate it to physicians with ambiguity. Nurses hesitated to call physicians about cognitive and behavioral changes and expressed discomfort in labeling patients as delirious without a physician's diagnosis.

Findings from this study further support results of our previous quantitative study, indicating that nurses know symptoms of delirium but inconsistency in physician communication interferes with diagnosis of delirium. Many barriers hamper the recognition of delirium, but through identification of these barriers, a need for objective assessment methods was realized. Our findings were used to develop a delirium interdisciplinary plan of care and an education program on the Confusion Assessment Method for the Intensive Care Unit for all nursing staff.

Melissa Browning, Lynn Richter; Rush University Medical Center, Chicago, IL

To determine if a higher glycemic target prevents hypoglycemia and to assess what impact a higher glycemic target has on blood glucose variability.

Tight glycemic control has been demonstrated to affect patients' outcomes from critical illness and is now an established standard of care. Because of the consequences of hypoglycemia, higher glycemic targets have been implemented; yet higher targets may result in glycemic variability, which also increases mortality rates.

A descriptive study design was used to assess episodes of hypoglycemia, target blood glucose levels, and hyperglycemia before and after implementation of a nurse-led initiative with the use of an insulin infusion protocol with a higher glycemic target. Blood glucose levels were reviewed on 140 patients in a 6-month period in 4 intensive care units (ICUs) at a Midwestern university-affiliated medical center. Data review focused on the surgical ICU, which used the most insulin infusions.

For all ICUs, hypoglycemia decreased after an intravenous insulin protocol with a higher glycemic target was implemented. A total of 68 episodes of hypoglycemia occurred in the 3 months before the new protocol was implemented, compared with 19 episodes of hypoglycemia in the 3 months after the protocol was implemented. In the surgical ICU, the amount of hyperglycemia increased after the new protocol was implemented. Blood glucose levels were greater than 160 mg/dL 16.6% of the time in the 3 months before and 39.7% in the 3 months after the protocol was implemented. By using a higher glycemic target, more blood glucose levels fell in the target range. In the surgical ICU, blood glucose levels were in the target range 40% of the time in the 3 months before and 48% of the time in the 3 months after the protocol was implemented.

A nurse-led initiative targeting glycemic control with the use of higher glycemic targets resulted in fewer episodes of hypoglycemia and increased the percentage of glucose levels in the target range. Focusing on glycemic control in the ICU remains an important area of nursing care to ensure best outcomes for patients.

Leanne King, Faye Clements, Patricia Suggs, Sheila Reagan, Lisa Medlin, Donya Harding; Gaston Memorial Hospital, Gastonia, NC

Evidence to support warming cabinet temperatures for heating cotton blankets is lacking. Agencies that guide accrediting bodies set an initial requirement for a maximum cabinet temperature of 110ºF (43ºC), which was recently increased to 130ºF (54ºC) without published supporting evidence. This study provides evidence by answering the question: What is the actual temperature of a blanket warmed at a cabinet setting of 110ºF (43ºC) or 150ºF (66ºC) immediately upon removal from the warmer, after 30 seconds, after 60 seconds, and with the blanket unfolded?

Warmed blankets provide comfort and thermodynamic regulation for patients. The clinical guidelines of the American Society of Perianesthesia Nurses recommend passive insulation to promote normothermia. Before 2005, cabinets were heated to maximum temperatures recommended by the manufacturers. In 2005, the ECRI Institute issued a recommendation to standardize cabinet temperatures to 110ºF (43ºC) to prevent burns. That recommendation was based on a 1947 study in which tissue was constantly exposed to higher temperatures and a concern that fluids would be heated with blankets.

The quasi-experimental study was conducted by using the same model of warming cabinet on 3 inpatient units. Blanket temperatures were measured on 6 different days by 3 investigators who used the same procedure and equipment. All temperatures were measured in degrees Fahrenheit. The sample comprised 136 blankets from a cabinet set at 110ºF (43ºC) for group A and 134 blankets from a cabinet set at 150ºF (66ºC) for group B. Temperatures were measured at the innermost fold of the blanket. Temperature measurements obtained with an infrared thermometer were recorded for each blanket immediately upon removal from the cabinet, at 30 and 60 seconds after removal from the cabinet, and with the blanket unfolded. Data were analyzed with SPSS 17.

In both groups, blanket temperature was highest immediately upon removal from the warmer and decreased with each subsequent measurement. The highest recorded blanket temperature, 142ºF (61ºC), occurred in group B, with the cabinet set at the manufacturer's maximum recommended temperature of 150ºF (66ºC); that blanket measured 130.3ºF (54.7ºC) at 30 seconds, 120ºF (49ºC) at 60 seconds, and 101.5ºF (38.6ºC) unfolded. The mean blanket temperature was 101ºF (38ºC; SD, 4.04ºF) in group A and 128ºF (53ºC; SD, 6.56ºF) in group B.

No occurrences of burn injuries related to heated cotton blankets have been documented in our facility. Based on the experience before 2005 and findings of this study, the temperatures of the cabinets have been returned to the maximum recommended settings from the manufacturer. Patients now have warm blankets. Nurses must continue to challenge agencies that guide recommendations for patient care to provide evidence to support those recommendations.

Shoshana Arai, Jennifer McAdam, Kathleen A. Puntillo; University of California San Francisco, San Francisco, CA

An exploratory randomized pilot study was conducted to evaluate whether anxiety and stress levels in patients' family members could be reduced by the family members' active participation in nonpharmacological bedside interventions to ameliorate the patient's thirst or pain symptoms. The purpose of this secondary analysis was to examine the beneficial effect of attention from a health care provider on a family member's anxiety and stress.

Studies on family members of patients in intensive care units (ICUs) indicate that family members may experience anxiety and acute stress. The unexpected hospitalization of their loved one may precipitate a state of crisis, potentially triggering short- and long-term anxiety in family members that may put them at risk for posttraumatic stress disorder. Some family members have wanted to be involved with their loved ones’ care. Active involvement with patient care interventions may help to alleviate family members’ anxiety or stress.

Family members (n = 39) of ICU patients who reported thirst or pain were randomized to either the control or the intervention group. Participating family members were primarily wives without a history of anxiety. Family members in the intervention group (n = 20) were coached to provide either a thirst or a pain intervention; family members in the control group (n = 19) were observed for a similar 30-minute period. All family members spent time with a research nurse completing pretest and posttest surveys about anxiety. Multilevel regression analysis was performed to assess family members' state anxiety scores on the Spielberger State-Trait Anxiety Inventory (norm, 35) and the Acute Stress Disorder Scale (norm, <56).

Both groups of family members had high pretest state anxiety scores: 40.9 (control) vs 41.2 (intervention); posttest scores declined to near-normative levels of 36.8 (control) vs 33.6 (intervention). Pretest scores on the Acute Stress Disorder Scale were 39.8 (control) vs 40.4 (intervention) points and declined to 37.1 (control) vs 36.7 (intervention) points after the session. Even though these scores were elevated, they were below the at-risk score for the development of posttraumatic stress disorder. Although the differences were not statistically significant, anxiety and stress scores decreased after the session in both groups of family members, despite the short interval between pretests and posttests and regardless of group assignment.

The reduction in ICU family members’ anxiety and acute distress scores in both the control and intervention groups suggests a possible beneficial effect of health care providers’ focused attention on family members’ high levels of stress and anxiety. Family members’ well-being may benefit from measures that address their distress levels, value their presence at the bedside, and support their active participation in the patient’s care. Sponsored by: National Palliative Care Research Center.

Lora Ott, Marilyn Hravnak, Sunday Clark, Nikhil B. Amesur; University of Pittsburgh, Pittsburgh, PA

To describe the characteristics of inpatients who experienced instability requiring a call to the medical emergency team while in the radiology department (RD-MET) and explore the characteristics associated with their outcomes after RD-MET.

Patients are at risk for their condition becoming unstable while they are outside of their usual care area, particularly in the radiology department. One rescue intervention is RD-MET activation to bring a team of critical care providers to the radiology department. Little is known about MET activation in the radiology department. Enabling nurses to know more about the antecedents of RD-MET could lead to earlier detection of instability, improve patient outcomes, inform interventions to prevent the need for RD-MET, and potentially alter systems of care in the radiology department.

All RD-MET activations for 1 year (January 1, 2009–December 31, 2009) of inpatients at least 18 years old at a tertiary care center with a well-established MET system were retrospectively reviewed. Patients were identified from the hospital’s MET database. Patients’ characteristics before RD-MET (age, sex, race, comorbid conditions [Charlson Index], admitting diagnoses, unit of origin) and outcomes after RD-MET were obtained from electronic medical records. Patients were classified as having a poor outcome after RD-MET if they required a higher level of care (increased respiratory or cardiac support, emergent procedure, transfer to a higher acuity care unit) or died before discharge.

The study sample (n = 64) was 52% female and 89% white and had a mean (SD) age of 61 (19) years. Admitting diagnoses were neurological (20%), cardiovascular (16%), and abdominal (16%). The most common comorbid conditions were chronic obstructive pulmonary disease (23%) and diabetes (20%). The mean (SD) total score on the Charlson Comorbidity Index was 4.6 (2.8). Most RD-MET inpatients were from a general care unit (48%), and 56% were receiving oxygen support before the event (33% by nasal cannula). After RD-MET, 61% of patients required a higher level of care, and 22% did not survive to discharge (3% of those died during the MET). Patients with preexisting comorbid conditions were more likely to have poor outcomes after RD-MET (P = .001).

Of patients who experienced RD-MET, 1 in 5 had neurological diagnoses and 1 in 6 had cardiovascular or abdominal diagnoses. About two-thirds of RD-MET patients required a higher level of care afterward, and almost one-quarter died before discharge. Further study is needed to understand the mechanisms whereby patients deteriorate in the radiology department and how to improve systems of care, including the education and availability of radiology department staff, to better detect and support patients in unstable condition in advance of a MET call.

Cynthia Chernecky; Medical Colleges of Georgia, Augusta, GA

(1) Evaluate in vitro differences in colony-forming units (CFUs) with 4 different bacteria over 4 days using 5 different needleless intravenous catheter connectors: 1 positive-, 3 negative-, and 1 zero-displacement connector. (2) Evaluate the best connector’s occlusion rates in multiple clinical settings. (3) Compare 2 antibacterial needleless intravenous connectors (1 silver coated, 1 with chlorhexidine/silver ion engineering) with the best nonantibacterial needleless connector from previous research.

Four pathogens are responsible for 60% of intraluminal catheter-related bloodstream infections (CR-BSIs): Staphylococcus epidermidis, S aureus, Pseudomonas aeruginosa, and Escherichia coli. CR-BSIs cost $225 million and 200 000 intensive care unit days per year. Manufacturers of positive-displacement connectors received an alert and notification letter from the Food and Drug Administration in July 2010 regarding the need to prove that positive-displacement connectors do not cause bloodstream infections. Research has shown that both positive- and negative-displacement connectors are associated with CR-BSIs. Additionally, negative-displacement connectors are associated with increased occlusions that lead to CR-BSIs. Theoretically, the new silver-coated and ion-engineered technologies of needleless connectors promote antibacterial activity. However, once blood contacts the silver coating, antibacterial effectiveness may be lost, which may not happen with the ion-engineered connector. Therefore, researching comparative technologies for bacterial growth patterns is necessary to refine nursing care and decrease the incidence of CR-BSIs, particularly in immunocompromised patients.

An independent laboratory, Nelson Laboratories, Inc (Salt Lake City, Utah), tested the different needleless connectors, 20 connectors of each type with 6 controls, each day for 4 days under identical laboratory conditions. Each connector was swabbed and inoculated with a minimum of 10⁵ organisms from a pooled specimen of 4 different bacterial species (Staphylococcus epidermidis, Staphylococcus aureus, Pseudomonas aeruginosa, and Escherichia coli). Appropriate equipment, reagents, media, and safety precautions were employed. Repeated-measures analysis of variance was used to examine differences between connectors over time (significance level, P = .05); Bonferroni post hoc testing determined specific group differences.

RyMed Technologies InVision-Plus (nonantibacterial) had the best overall performance at reducing the number of CFUs for all the pathogenic organisms compared with the other connectors; B-D Q-Syte had the worst overall performance. CareFusion/Medegen MaxPlus Clear and ICU Medical MicroClave were both inconsistent in the number of CFUs between growth days; Hospira Lifeshield TKO/Clave had consistently high CFU counts. The silver-coated Baxter V-Link (antibacterial) connector had up to 200 times more bacteria than RyMed Technologies’ InVision-Plus (nonantibacterial) and InVision-Plus CS with chlorhexidine/silver ions (antibacterial) connectors, regardless of bacteria type. These findings demonstrate that antibacterial and nonantibacterial needleless connectors differ in CFU counts in vitro, and higher counts increase the probability of CR-BSIs in patients.

The positive- and negative-displacement connectors and the silver-coated needleless connector were not effective in controlling bacterial growth. Only the nonantibacterial connector and the antibacterial connector engineered with chlorhexidine/silver ions consistently controlled CFU counts for all 4 bacteria over all 4 days. In oncology clinical settings, the nonantibacterial zero-fluid-displacement needleless connector decreased occlusion rates by 20% to 84% without other changes to patient care methods.

Deborah Kadich, Michelle Nellett, Mary Gregory, Ivy Balanlayos, Cheryl Lefaiver; Advocate Christ Medical Center, Oak Lawn, IL

To compare the effectiveness of gastric feeding vs postpyloric feeding in cardiovascular surgical patients. The aim was to compare the time to reach the prescribed goal rate (in milliliters per hour) and the total calories delivered per 24-hour period.

Cardiovascular surgery and advanced age may increase surgical risk because of nutritional compromise. Inadequate nutrition can lead to infection, poor wound healing, increased ventilator days, and increased length of stay. Documentation on gastric feeding indicated that patients were not meeting their feeding goals. Because increased nutrition in cardiac patients should result in better outcomes, a comparison of the feeding tubes was needed to determine which tube would benefit patients more.

This retrospective study was conducted by chart review. The convenience sample, with a size of 30 per group estimated by power analysis, consisted of all patients in the cardiovascular surgical heart unit who had received enteral feeding in the past 2 years. Data collected included age, sex, medical diagnosis, day of insertion, type of feeding tube, radiographic confirmation, formula, goal rate, hours to goal, volume per 24 hours, calories per 24 hours, whether feeding was withheld, and the reason feeding was withheld. The time to reach the goal feeding volume and the total calories per 24-hour period were compared between groups by using independent t tests.

The sample included 28 patients with postpyloric tubes and 24 with nasogastric tubes in the cardiovascular surgical intensive care unit (patients who died were excluded). The length of time to reach the goal feeding rate differed significantly (P < .001) between the 2 groups. Patients with the postpyloric feeding tube reached the goal feeding rate a mean of 30 hours earlier than did patients with the nasogastric tube. The postpyloric group also received a significantly higher proportion of tube feeding and calories in a 24-hour period than the nasogastric feeding group received.

This comparison of gastric versus postpyloric feeding tubes showed that cardiovascular surgical patients with postpyloric feedings reached their nutritional goal much earlier than did patients who received nasogastric tube feedings. In addition, patients with postpyloric feedings received more total volume per day and subsequently more calories per day. Promoting the nutritional status of cardiac surgical patients preoperatively and during postoperative recovery will enhance patients’ outcomes.

Nancy Richards, Mary Ann Comerford, Annette Doyle, Susan Windsor, Sharon Marsolf, Mary Reffett, Patty Stauffer; Saint Luke’s Hospital, Kansas City, MO

To determine if coordinating patient care activities during the night in an intermediate surgical cardiac care unit would decrease the number of entries of health care personnel into the patient’s room at night and increase patients’ perception of rest and satisfaction.

Studies have described the relationship of the intensive care unit (ICU) environment to sleep deprivation and postoperative confusion or delirium. Unit noise, lighting, and room entry by health care workers have been cited as major causes of sleep interruption. No studies to date have been focused on decreasing the environmental factors associated with sleep disturbances in the post-ICU phase of hospitalization.

A posttest-only, control-group experimental design was used to compare coordinated care delivery during the night shift with usual nighttime care. The primary dependent variables were the number of room entries during the night by health care providers and the patients’ perception of rest and satisfaction. Subjects were 80 postoperative adult cardiothoracic surgery patients in the surgical intermediate care unit (SICC) who were randomly assigned to the control or experimental group. Data were summarized with descriptive statistics. Analysis of variance was used to compare dependent variables between the groups. The level of significance for all tests was P < .05.

The total number of room entries per participant ranged from 0 to 17 (mean, 3.7; SD, 2.6). The numbers of entries coded as required and nonrequired ranged from 0 to 17 (mean, 3.5; SD, 2.6) and 0 to 3 (mean, 2.25; SD, 0.6), respectively. The total number of room entries was significantly lower for the coordinated care group than for the control group, a 38% reduction in entries during the night shift. Visual analog scale (VAS) scores for patient satisfaction ranged from 22 to 100 mm (mean, 89.5 mm; SD, 15.2 mm), and rest scores ranged from 10 to 100 mm (mean, 73.6 mm; SD, 29.3 mm). Satisfaction and rest VAS scores were similar for the control and experimental groups, with no significant differences noted.

The number of room entries per patient during the night for patients in the coordinated care group was 38% lower than the number in the usual-care group. Although the coordinated care group had significantly fewer interruptions during the night, their perceptions of rest and satisfaction were similar to those of patients in the usual-care group.

Alice Reshamwala; Duke University Hospital, Durham, NC

To determine the effectiveness of a new cleaning protocol on surface contaminants of telemetry systems in our cardiovascular progressive care units. Study goals are to determine the effect of current cleaning practice on numbers and types of surface contaminants on telemetry systems, to determine if pathogens’ growth differs between 2 medical and 2 surgical units, and to evaluate the need for disposable lead wires.

Hospital-acquired infections caused by cross-contamination via multidrug-resistant surface contaminants are a threat to quality of care and increase costs of care. Studies show that nosocomial infections are associated with increased length of stay (12–18 days) and result in an estimated $6.7 billion in additional costs per year in the United States. Disposable electrocardiography lead wires have been suggested as a method of decreasing cross-contamination and infection rates; however, empirical data to support their use are limited.

A prospective, cross-sectional, controlled intervention study design was used to evaluate colonization of surface contaminants on telemetry systems. Each randomly selected telemetry system served as its own control (preintervention) and case (postintervention). Swabs were taken before and 5 minutes after cleaning with sodium hypochlorite wipes. Nurse investigators collected samples by using a standardized technique and refrigerated each sample within 2 hours of collection. Swabs were shipped to an independent laboratory for analysis. Organism colonization before and after the intervention was analyzed by using the McNemar chi-square test; descriptive differences by unit also were analyzed.

Thirty telemetry systems from medical units and 29 from surgical units were tested. Forty-one systems (69%) grew organisms before the intervention, and 14 systems (24%) grew organisms after the intervention. The difference in organism growth before and after the intervention was both clinically and statistically significant (P < .001). Of the systems with growth before the intervention, 34% (n = 14) were in surgical units and 66% (n = 27) were in medical units. One surgical unit had a much lower rate of organism growth (19%) than the other 3 units (85%, 88%, and 93%). Reasons were explored by using descriptive environmental and practice-based analyses.

The significant decrease in colonization after use of a standardized cleaning strategy suggests that a standardized cleaning process that uses sodium hypochlorite reduces surface contamination and may result in decreases in nosocomial infection in progressive care units. The cost of disposable wires may be avoided by using a cleaning protocol. Future research should address variation between units in cleaning products used and cleaning processes and the transmission of surface organisms to the blood stream.

Ann Will, Regina M. Fink, Ann Will Poteet, Mary Beth Flynn Makic, Kathleen S. Oman, Janna Petrie, Barbara Krumbach; University of Colorado Hospital, Denver, CO

To describe the experience of patients receiving mechanical ventilation in the intensive care unit (ICU) at a university hospital. Specific aims of the study were to explore patients’ and their family members’ memories of pain, anxiety, distress, and dyspnea following mechanical ventilation and to correlate nurse-documented pain assessment with patients’ and family members’ reported memory of pain, anxiety, distress, and dyspnea during the mechanical ventilation experience.

ICU patients who receive mechanical ventilation often experience pain and distress, requiring sedation and analgesic medications. Few studies have examined patients’ memories of the ventilation experience or the congruence between nurses’ observations and patients’ report of symptoms. Research is needed to help understand ventilator patients’ pain and symptom experience, as well as the patient’s family members’ perceptions of the experience, in order to inform nursing management of ventilator patients.

This retrospective, descriptive study recruited subjects aged 18 to 89 years with any diagnosis from specialty ICUs. Each subject had received mechanical ventilation for more than 10 hours and had been extubated for more than 10 hours before the interview. Patients with burns, tracheostomy, or cognitive impairment were excluded. Each patient identified family members for inclusion. Informed consent was obtained from all participants. Data were obtained by chart review and by survey and interview methods. Quantitative data were entered into an SPSS database and analyzed by using descriptive statistics and tests of difference and association. Qualitative data were coded and analyzed for categories and themes.

A total of 85 patients and 73 family members were interviewed; 49% of patients had sedation management protocol orders. Mean ventilation time was 100.3 hours. Medications used included fentanyl (92%), benzodiazepines (78%), and anesthetics (27%). Patients’ perceptions of anxiety, nightmares, and dyspnea were correlated with their perception of pain (P < .001). Family members’ memory of pain was correlated with nurses’ pain assessments, but patients’ memory was not. Themes identified in the qualitative analysis of the patients’ experience were pain/discomfort, distress/anxiety, and communication/awareness of environment. Themes identified for family members were pain/discomfort, sedation/sleeping, and communication difficulties.

Patients and their family members identified difficulty with communication as a significant barrier during the mechanical ventilation experience. This finding highlights the need for further inquiry and for the development of effective communication tools, such as bedside reporting, patient communication boards, and teaching for patients and families about the ventilator’s purpose and function, use of medications, and weaning protocols, to better inform patients and their families and engage their feedback.

Susan Schultz; University of North Florida, Jacksonville, FL

To evaluate the effectiveness of an interactive Web-based education program combined with unit-based collaborative learning activities on both telemetry staff nurses’ knowledge of dysrhythmias and their monitoring practices for patients at risk for wide QRS complex tachycardias. The project was based on the AACN Practice Alert “Dysrhythmia Monitoring.”

Standards of practice for hospital electrocardiogram monitoring were recommended in 2004 by the American Heart Association, although the recommendations are still not widely followed. Nurses who work in telemetry units in hospitals have an important responsibility to monitor patients’ cardiac rhythms; however, many nurses monitor in a single lead regardless of diagnosis and are unable to differentiate wide QRS complex tachycardias.

This interventional, 1-group, before-and-after cohort study design consisted of 4 components: an interactive Web-based educational program with a pretest and posttest, unit-based collaborative activities, competency skills validation, and audits of patients’ electrode placement and lead selection at baseline, 6 weeks, and 18 weeks. The education program and unit-based activities, which were conducted for 6 weeks, were focused on demonstrating correct electrode placement and lead selection for arrhythmias, identifying when and how to measure QTc intervals, differentiating wide QRS complex tachycardias, and describing the appropriate nursing interventions.

Of 42 nurses who worked on the unit, 34 consented to participate, 16 started the module, and 10 finished all the components. Pretest scores ranged from 0 to 60, with a median of 36.5; posttest scores ranged from 47 to 93, with a median of 83.5. The Wilcoxon signed rank test showed a significant difference between pretest and posttest scores (P = .005). The audit results did not indicate significant differences in the proportions of correct electrode placement and correct lead selection at baseline, 6 weeks, and 18 weeks. The unit-based collaborative learning activities and competency skills validation reinforced content taught in the module.

The program was effective in increasing nurses’ knowledge about dysrhythmias; however, it was not effective in changing monitoring behavior related to electrode placement and lead selection. This may be related to the small percentage of staff on the unit who completed the project. In order to improve patients’ outcomes, this type of program may be more effective if it involves all the staff members on the unit who are responsible for applying electrodes and selecting the monitoring lead.

Richard Arbour; Albert Einstein Healthcare Network, Philadelphia, PA

To determine the effectiveness of earlier metabolic resuscitation in a prospective series of patients with severe traumatic brain injury (TBI) in a tertiary-care referral center. Secondary purposes were to determine the optimal timing of metabolic/cellular-level resuscitation within the continuum of care before formal determination of death by neurological criteria, to optimize cardiopulmonary stability as part of cellular-level resuscitation, and to analyze the impact on organ recovery.

Cardiopulmonary instability after catastrophic TBI and brainstem herniation typically occurs subsequent to multisystem consequences of brain herniation syndromes. Delays in following brain death protocols after terminal brain herniation increase risk of organ loss. Hormonal replacement therapy (HRT) is generally initiated after confirmed brain death and donation consent. Early HRT significantly improves cardiopulmonary stability and organ function and preserves the option of donation.

A prospective series of 10 patients with massive TBI who exhibited severe cardiopulmonary instability, total loss of neurological function, and refractory hypotension was evaluated. Eight of the 10 patients (80%) were supported before brain death was declared solely with maximal lung ventilation, volume resuscitation, and dosing with inotropic agents or vasopressors. In the other 2 patients (20%), who had cardiopulmonary instability and imminent cardiac arrest, HRT was initiated early, before brain death protocols were implemented, with administration of glucocorticoids, thyroid hormone, and vasopressin. Electronic patient data were retrieved, and time-sensitive changes in cardiopulmonary function and response to HRT were analyzed retrospectively.

Of the 8 patients in this series who did not receive HRT before brain death was declared, 3 had hypotension refractory to maximal vasoactive drug dosing, increasing the risk of organ damage after terminal brainstem herniation; 1 progressed to cardiac arrest with loss of all transplantable organs. In the 2 patients who received early HRT, marked improvement in oxygenation began within 1 hour, decreasing oxygen and ventilation requirements as well as vasoactive drug requirements. Pharmacological support for blood pressure was weaned off within 4 hours. Formal brain death protocols followed, and families consented to donation, yielding 8 transplanted organs.

Cardiopulmonary instability after severe TBI identifies patients likely to benefit from HRT before implementation of formal brain death protocols. Early HRT was integral and decisive to resuscitation in those 2 patients and was an ethically sound, effective, and easy-to-use mechanism-based intervention in patients with apparent loss of all brain function who were too unstable for implementation of formal brain death protocols. HRT is effective and appropriate for more widespread use after severe TBI.

Jean Christopher, Christine Perebzak, Carrie Gavriloff; Akron Children’s Hospital, Akron, OH

To determine whether used patient bath basins in the pediatric intensive care unit (PICU) at a free-standing Magnet-designated children’s hospital are a potential source of hospital-acquired infections.

Hospital-acquired infections are associated with significant morbidity, mortality, and economic burden to society, making them a major public health concern. Nurses in PICUs often use bath basins to bathe patients. While bathing a patient, mechanical friction releases skin flora into bath water that can contaminate basins. Upon reuse of the basin, bath water may serve as a conduit for biofilm-forming pathogens that can contaminate skin or wounds and may lead to hospital-acquired infection.

Investigators sampled basins and collected data over 2 days; the estimated study length was 2 days plus the turnaround time for laboratory tests. Patient caregivers were blinded to the study. Twenty-one bath basins from PICU patients who had been admitted at least 48 hours earlier and bathed twice from head to toe, as confirmed in the patient’s record, were sampled for contamination with gram-negative rods, methicillin-resistant Staphylococcus aureus, and vancomycin-resistant enterococci. The interior perimeter, walls, and base of each bath basin were cultured by using a culture sponge. The cultures were packaged and express mailed to an outside microbiological testing laboratory on the same day the samples were gathered.

The laboratory provided a summary of culture results. Seventy-one percent of bath basins were contaminated with various organisms: 62% of the basins grew gram-negative rods, 24% grew enterococci, 5% grew Staphylococcus aureus, 5% grew methicillin-resistant S aureus, and 24% grew vancomycin-resistant enterococci.

Future studies are needed to determine if a relationship exists between contaminated bath basins and hospital-acquired infections. The PICU Performance Improvement Committee will continue to weigh available evidence to determine the best method of bathing patients with the least risk for patient development of hospital-acquired infections.

Rebecca McLaughlin, Mary Jane Bowles, Tom Malinowski; Mary Washington Hospital, Fredericksburg, VA

To determine the best interval frequency to reposition the endotracheal tube of patients receiving mechanical ventilation to prevent skin breakdown on the oral mucosa and lips.

Endotracheal tube stabilization is a high-priority practice in intensive care units (ICUs). It is equally important to prevent iatrogenic sores on the patient’s mouth and lips when securing the endotracheal tube. No research has been done to substantiate a time frame for repositioning an endotracheal tube. Using a single, commonly available commercial tube holder, we compared 3 repositioning intervals (12, 24, and 36 hours) to determine differences in skin breakdown in intubated adult patients.

After approval was received from the institutional review board, 449 intubated adults receiving mechanical ventilation and admitted to the surgical/medical ICU were prospectively enrolled in the study between July 2009 and April 2010. All endotracheal tubes were secured by using the Hollister ETAD Oral Endotracheal Tube Attachment Device upon arrival or upon intubation in the ICU. The endotracheal tube was repositioned during the initial ventilator check and at the required study interval. The respiratory therapist and nurse worked collaboratively to evaluate skin integrity and recorded observations on a data collection sheet. The baseline period (July–September 2009) included 128 consecutive patients on a 12-hour tube repositioning regimen. The first phase (October–December 2009) placed 145 consecutive patients on a 12-hour regimen, and the second phase (January–March 2010) placed 132 consecutive patients on a 24-hour regimen. The third phase was originally slated for April–June 2010 with more than 120 projected patients, but only 44 consecutive patients were placed on a 36-hour regimen. The primary outcome was the incidence of oral ulcers or skin breakdown at the site of the tube.

The baseline period (repositioning every 12 hours) showed a 3.9% incidence of oral ulceration (events per patient); a 5% incidence of oral ulceration was considered a clinically significant threshold. Phase 1 (repositioning every 12 hours) showed a 2.8% incidence of ulcers; phase 2 (repositioning every 24 hours) showed a 4.5% incidence; and phase 3 (repositioning every 36 hours) showed a 15.9% incidence. The study was terminated early in the third phase because the incidence of events exceeded our threshold.

Our results indicate that endotracheal tubes should be repositioned at least every 24 hours to keep the incidence of skin ulceration below 5%. Further studies should evaluate whether more frequent repositioning results in a lower incidence of ulceration.

Mary Hoffmann; The Christ Hospital, Cincinnati, OH

To evaluate the implementation of an evidence-based prophylactic amiodarone protocol for safety and efficacy in reducing the incidence of postoperative atrial fibrillation (POAF) after coronary artery revascularization (CAR). Also, we wanted to identify preoperative and perioperative risk factors associated with the development of POAF in our 16-bed cardiovascular intensive care unit, sited in a 550-bed tertiary care hospital.

POAF is the most frequent dysrhythmia after CAR and is associated with complications, additional therapy, and longer hospital stays. Prophylactic amiodarone protocols have been validated as safe and beneficial in the prevention of POAF after cardiac surgery. No optimal regimen has yet been identified. We wanted to evaluate if the use of our current treatment protocol, given prophylactically, would be feasible and demonstrate similar outcomes in POAF reduction.

A quasi-experimental design was used for this study, which was approved by the institutional review board. The prospective sample of 100 consecutive adults undergoing nonemergent CAR was started on amiodarone within 4 hours of ICU admission and continued through discharge. Exclusion criteria were history of atrial fibrillation or heart block and any contraindication to amiodarone. A retrospective chart review of the prior 100 patients served as the historical control. Data were collected on demographics, risk factors, and comorbid conditions. Length of bypass and total surgery times were perioperative indicators. Outcomes measured included the timing, incidence, and frequency of POAF. Data analysis was performed with SPSS (v17).

The sample was primarily male (76%), with a mean age of 62.7 years, a mean ejection fraction of 50.2%, and a mean of 3.25 grafts placed in surgery. Many had diabetes (44%) and were taking beta-blockers preoperatively (49.5%). The demographic variables did not differ significantly between groups. POAF occurred most often on postoperative day 2. Older age, lack of preoperative beta-blocker use, higher body mass index, and enlarged left atria were predictors of POAF (R² = 0.412, P < .05). No perioperative risk factors were found, and no complications occurred. Thirteen patients who met the criteria were not started on the protocol, and an additional 20 patients were not continued on the protocol through discharge.

Application of our short-term protocol was beneficial in determining preoperative risk factors for POAF. A larger sample size may have shown a significant reduction in the incidence of POAF. Inconsistencies and difficulties with initiation of the protocol in the early postoperative period were identified. Issues related to nurses’ readiness and computer order entry require further evaluation for protocol refinements in anticipation of further reduction in POAF and expansion to valve and combined CAR/valve surgeries.

Donald Grimes, Carol Daddio-Pierce, Donna Marie Lynch, Leah Szumita; Brigham and Women’s Hospital, Boston, MA

To examine the essential core data elements exchanged in a nursing handoff during the change-of-shift report in both intermediate and critical care units.

Numerous types of handoffs can be found in the health care arena. Breakdown in verbal or written communication has been the major cause of sentinel events in the health care industry. In fact, failures associated with handoffs may be among the most important contributors to preventable adverse events in health care. The lack of a handoff protocol is a major contributor to medical errors.

A survey was sent via the Internet to a convenience sample of 1551 nurses; 418 surveys were returned (27% response rate), and 362 respondents (86.6% of those who returned surveys) were enrolled: 219 from intermediate care units and 143 from critical care units. Forty-one core data elements were observed and extrapolated from the institution’s current reporting practice. The survey was divided into questions from the perspective of giving report and questions from the perspective of receiving report. Three open-ended questions helped to identify reasons for interruptions, pieces of information that were needed, and participants’ opinion of what would improve the handoff. Questions totaled 107. Rating was done with a 5-point Likert frequency scale from “always” (5) to “never” (1).

More intermediate care nurses than critical care nurses reported use of a template, and the intermediate care nurses reported a higher frequency of use. Both groups indicated, with greater than 90% frequency, that an opportunity for clarification of information was not an issue during a handoff. Both groups similarly reported that orders for the past shift/24 hours are reviewed usually or more often. The intermediate care group physically reviewed the patient as part of the handoff more often than nurses in the critical care group reported doing so. Interruptions were reported more often by the critical care group. Both groups cited the same reasons for interruptions, but the relative frequency of those reasons was inverted between the groups.

The Joint Commission’s standard for handoff communication of using 2 sources for patient identification has not been fully assimilated into practice. The 2 groups used a variety of handoff reporting styles, which could account for the lack of standardization when they are considered as a combined group. Each group practiced the focus on patient identifiers in its own way. The critical care group’s approach to physical review could be inherently related to the nature of the critical care environment as well as to continuity in patient assignments (eg, caring for the same patient on a daily basis). Common themes to improve the handoff process identified by most participants in both groups included fewer interruptions, a standardized template, and a change in their current handoff report style.

Lauren Cote, Donna Hacek, Hongyan Du; NorthShore University HealthSystem, Evanston, IL

In this study, electrocardiography (ECG) leads from rooms in the intensive care unit (ICU) where patients had been placed in contact isolation because of colonization with multidrug-resistant organisms were cultured before and after routine cleaning. The purpose was to determine whether ECG leads could potentially transmit organisms from patient to patient despite usual cleaning procedures.

Annually, 2 million patients (10% of patients admitted to US hospitals) acquire a nosocomial infection. In 2007, nosocomial infections caused 88 000 deaths and cost between $28 billion and $34 billion. One-third of nosocomial infections are considered preventable. Data suggest that organisms are easily transmitted via hospital equipment, and cleaning has been shown to be inadequate. Colonization and decontamination of ECG wires have not been explored.

The cohort consisted of 10 patients admitted to the ICU for >48 hours who were in contact isolation for colonization, or infection, with a drug-resistant organism (baseline culture). After patients’ ICU discharge, ECG leads were cultured before and after routine “terminal cleaning” of the room as follows: the proximal hub and distal end of 3 of the lead wires were aseptically pressed into a sterile petri dish containing Mueller Hinton agar with sheep blood (sample A). After terminal cleaning, the hub and lead wires were pressed into a second petri dish (sample B). Cultures were incubated for 2 days at 37°C. Organism culture and identification were conducted per the microbiology department’s protocol.

For the 10 patients, the organisms responsible for contact isolation included methicillin-resistant Staphylococcus aureus (n = 6), Acinetobacter baumannii (n = 2), Clostridium difficile (n = 2), Pseudomonas sp. (n = 2), and vancomycin-resistant enterococci (n = 2). For 2 patients (20%), cultures at baseline and precleaning (sample A) yielded the same organism. For 1 patient (10%), methicillin-resistant S aureus was cultured from sample B but not from the baseline sample or sample A. Cultures obtained after cleaning (sample B) did not grow baseline organisms for any patient. Multiple other organisms were cultured (in samples A and B) that were not present at baseline.

Routine cleaning of ECG leads may prevent colonization by drug-resistant organisms carried from patients. However, other organisms were cultured on the ECG leads. Cleaning decreased the colonies of organisms on the leads but did not sterilize the leads. Our methods did not include culture for C difficile in samples A and B, so conclusions regarding the colonization of lead wires by C difficile cannot be drawn. It is unclear whether the organisms cultured constituted a sufficiently large inoculum to transmit a nosocomial infection.

Emily Rhoades, Melissa Roach; Georgetown University, Washington, DC

To identify an effective strategy to decrease door-to-balloon (DTB) time for patients experiencing an ST-segment elevation myocardial infarction (STEMI). For these patients, time is of the essence when attempting to reestablish blood flow to the occluded coronary artery. This study investigated the effectiveness of the STEMI nurse role in decreasing the DTB time of patients with STEMI. The STEMI nurse facilitates stabilization and rapid transport of the patient.

Heart disease is the leading cause of death in the United States. The correlation between decreased DTB time and improved outcomes for patients is well established. Numerous studies show a significant decrease in mortality when DTB time is less than 90 minutes. Some evidence indicates that interventions including a specialized nurse responder contribute to reducing DTB time. However, no previous studies had isolated this nursing role relative to reduced DTB time.

A nonrandomized quasi-experimental design was used to examine the effect of the STEMI nurse’s role on the DTB time. Data were collected from patient charts and the hospital’s internal cardiac quality control database by using a structured data extraction form. A sample of 126 patients was selected from before and after introduction of the STEMI nurse intervention. Descriptive statistics were used to determine whether the preintervention and postintervention groups differed significantly in age or sex. The means, standard deviations, and significance were then calculated by using a 2-tailed, paired t test. Significance was determined on the basis of an α level of .05.

Data for 126 subjects were analyzed: 63 before and 63 after the implementation of the STEMI nurse role. Mean DTB times differed significantly (P = .003) from before to after implementation of the STEMI nurse role. The mean DTB time in 2008 was 73.44 (SD, 21.70) minutes, whereas in 2009, after implementation of the STEMI nurse, the mean DTB time had decreased to 61.81 (SD, 14.76) minutes. The time from the emergency department door to arrival in the catheterization laboratory decreased significantly (P = .03), from 45.35 (SD, 20.54) minutes to 38.00 (SD, 13.19) minutes in 2009 after implementation of the STEMI nurse role.
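For readers who want to check the group comparison against the reported summary statistics alone, the sketch below computes an independent-samples (Welch) t statistic from the means, standard deviations, and group sizes given above. This is only an illustration under stated assumptions: the authors report a paired t test on the raw data, which are not available here, so the exact P value of this approximation differs from the published one.

```python
import math
from statistics import NormalDist

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch t statistic and an approximate 2-tailed P value from
    summary statistics. Uses a normal approximation to the t
    distribution, which is reasonable here because df is large."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    t = (m1 - m2) / se
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Reported DTB times: 2008 (before STEMI nurse) vs 2009 (after)
t, p = welch_t_from_summary(73.44, 21.70, 63, 61.81, 14.76, 63)
print(f"t = {t:.2f}, approximate P = {p:.4f}")
```

Like the published paired analysis (P = .003), this independent-samples approximation finds the decrease in DTB time significant at the .05 level.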

Our findings show that the STEMI nursing role contributed to a significantly faster DTB time. Of specific significance is that the time from the emergency department door to arrival in the catheterization laboratory was shortened. As use of a STEMI nurse was the only known policy or procedural change, it seems that the improved collaboration resulting in faster transfer of patients to the catheterization laboratory was accomplished by the STEMI nurse. This nursing role has the potential to improve patients’ outcomes and overall system function.

Sarah Roberts, Anita Smith, Jacqueline Lollar, Jan Mendenhall, Henri Brown, Pam Johnson; University of South Alabama College of Nursing, Mobile, AL

To determine if the addition of educational requirements to the existing component of clinical hours in the final practicum experience would increase the confidence level of senior nursing students in their triage and decision-making skills.

A review of the literature revealed that experience in the clinical setting increases triage and decision-making abilities. Use of algorithms and the delivery of patient-centered care enhance these skills. The ability to recognize patients’ problems, to intervene, and to prioritize appropriately are critical skills for all nurses. The identification of effective educational strategies to promote this ability is of great value to both nursing educators and students.

A quasi-experimental design was used and included a control group as well as 3 educational interventions. Students self-selected to participate in the study; the final sample size was 14. Students were randomized to 1 of 3 intervention groups or to the control group. The educational interventions included participation in an adult or pediatric Advanced Cardiac Life Support course, participation in simulations with debriefings, or a combination of both. Pretesting and posttesting were conducted by using the Triage Decision Making Inventory (TDMI), which uses a Likert scale. The TDMI has been evaluated with nurses from various clinical specialties and is reliable.

A mixed analysis of variance was used to examine the 4 groups over time. The TDMI scores of all 4 groups increased during the 16-week study. Nursing students who participated in both simulations with debriefing and completed an adult or pediatric Advanced Cardiac Life Support class showed the largest margin of increase in confidence scores. Although the sample size was small, this increase was statistically significant.

The pilot study had limitations, including a small sample size. Despite this limitation, the statistically significant increase in TDMI scores of the group participating in multiple pedagogies in addition to clinical hours in the final practicum experience merits further examination. The combination of educational interventions may increase both the skill set and students’ confidence in their abilities to make appropriate clinical decisions.

Natalie McAndrew, Annette Garcia, Carolyn Maidl, Jane S. Leske, Rahul Nanchal; Froedtert Hospital, Milwaukee, WI

To describe critical care nurses’ levels of moral distress and the effects of that distress on their professional practice environment in critical care. Specific research questions included the following: (1) What is the level of moral distress? (2) What is the perception of the professional practice environment? (3) What is the relationship between moral distress and the professional practice environment? (4) How does moral distress affect the delivery of nursing care?

Critical care nurses commonly encounter situations that are associated with high levels of moral distress. Unresolved moral distress is associated with emotional exhaustion, job burnout, fatigue, and feeling ineffective as a nurse. Although multiple studies have been performed in this area, none have examined the relationship between nurses’ experiences of moral distress and their professional practice environment in critical care.

A descriptive, correlational, prospective, survey design was used. Selected aspects of Corley’s Moral Distress and Laschinger and Leiter’s Nurse Worklife models guided the design and selection of variables. All professional nurses employed in critical care at a major medical center in the Midwest were eligible to participate. Nurses were asked to complete the Moral Distress Scale (MDS, intensity and frequency) and the Practice Environment Scale (PES). The PES is composed of 5 distinct areas: leadership and support, participation in hospital affairs, physician/nurse collegial relationships, nurse manager resource and staffing adequacy, and foundations for quality of care.

A total of 33% (78/235) of nurses from 4 critical care units participated. Scores on the MDS ranged from 0 to 6. Moral distress intensity (MDI) ranged from 0.13 to 5.89 (mean, 3.59; SD, 1.33). Moral distress frequency (MDF) ranged from 0.03 to 3.03 (mean, 1.75; SD, 0.69). Scores on the PES ranged from 1 to 4. Total scores ranged from 1.87 to 3.77 (mean, 2.82; SD, 0.36). The intensity of moral distress was negatively related to physician/nurse collegial relationships. The frequency of moral distress also was negatively related to leadership and support, participation in hospital affairs, nurse manager resource and staffing adequacy, and physician/nurse collegial relationships. Frequency of moral distress affected all aspects of professional practice except foundations for quality of care.

This study is the first to examine the influence of moral distress on professional practice in critical care. It is important to monitor the frequency of moral distress because it affects all aspects of professional practice except foundations for quality of care. Identification and implementation of strategies that will improve nurses’ sense of control over practice, teamwork, communication, and autonomy are needed.

Takeshi Unoki, Yuji Kenmotsu, Takeharu Miyamoto, Akiko Makino, Ryuichi Yotsumoto, Mie Sato, Hideaki Sakuramoto, Taro Mizutani; St Luke’s College of Nursing, Tokyo, Japan

To evaluate the Japanese-translated Intensive Care Delirium Screening Checklist (ICDSC) as an alternative to the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) for detecting delirium in ICU patients receiving mechanical ventilation.

Studies indicate that ICU staff frequently fail to recognize delirium without use of an objective assessment tool. The CAM-ICU has been validated for use in patients receiving mechanical ventilation. Unlike the CAM-ICU, the ICDSC is a simple checklist that does not require the patient’s cooperation, and although used in patients receiving mechanical ventilation, its validity in this population is not well established.

We developed a Japanese version of the ICDSC by using a back-translation method. A convenience sample of adult patients receiving mechanical ventilation who were admitted to a medical-surgical ICU or an ICU specializing in emergency medicine was used. Patients who had neurological disease or persistent coma or who were receiving neuromuscular blockade were excluded. Assessments were made only when the subject had a score greater than −4 on the Richmond Agitation Sedation Scale (RASS). An investigator assessed the CAM-ICU, and the bedside nurse independently assessed the ICDSC; each assessor was blinded to the other’s assessments. Delirium was defined as a score >3 on the ICDSC.

Forty-seven patients were assessed, resulting in 152 paired delirium assessments. Patients (mean age, 67 years; SD, 13 years) were receiving sedatives in 96% of assessments. In 97% of assessments, patients had a RASS score between −3 and 0. Delirium was detected in 85 assessments (56%) with the CAM-ICU and in 78 assessments (52%) with the ICDSC; the agreement rate was 67%. The sensitivity and specificity of the ICDSC were 68% and 69%, respectively.
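The reported sensitivity, specificity, and agreement can be tied together in a standard 2×2 table, with the CAM-ICU treated as the reference standard. The cell counts below are an approximate reconstruction back-calculated from the percentages in the abstract (about 58 of 85 CAM-ICU-positive assessments also ICDSC-positive, and about 46 of 67 CAM-ICU-negative assessments also ICDSC-negative); they are illustrative, not the study’s raw data.

```python
# Approximate 2x2 reconstruction; counts are illustrative only.
tp = 58  # ICDSC positive and CAM-ICU positive (true positives)
fn = 27  # ICDSC negative but CAM-ICU positive (false negatives)
fp = 21  # ICDSC positive but CAM-ICU negative (false positives)
tn = 46  # ICDSC negative and CAM-ICU negative (true negatives)

total = tp + fn + fp + tn            # 152 paired assessments
sensitivity = tp / (tp + fn)         # ~68%, as reported
specificity = tn / (tn + fp)         # ~69%, as reported
agreement = (tp + tn) / total        # close to the reported 67%

print(f"n = {total}")
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
print(f"raw agreement = {agreement:.0%}")
```

The small mismatch between the computed agreement (~68%) and the reported 67% reflects rounding in the published percentages, not an error in the formulas.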

Our findings suggest that the Japanese version of ICDSC often did not allow recognition of delirium that was detected with use of the CAM-ICU. These findings may be attributed to insufficient reliability of the translated ICDSC and/or to an inadequate cutoff point in this population.

Denise Li, Kathleen Puntillo; California State University, East Bay and University of California, San Francisco, Hayward, CA

A substantial number of intensive care patients report pain caused by nursing care activities. Underassessment of pain is a greater issue when sedation hinders patients’ ability to self-report or to display behaviors for coping with pain. This study aimed to determine whether findings from our prior work with a homogeneous patient sample could be replicated in a heterogeneous population of ICU patients. The goal was to extend current knowledge on whether certain autonomic reactions add to the assessment of nociception in deeply sedated patients who are unresponsive to external stimuli.

Autonomic reactions to noxious stimulation (nociception) occur independently of patients’ sedation level, pain perception, and behavioral responses. Persistent nociception induces physiological instability and adversely affects health outcomes. Although behavioral pain tools have shown substantial validity and reliability, they are of limited use when sedated patients become unresponsive to external stimuli. An alternative method of assessing nociception that can add to the current objective assessment of ICU patients’ pain is needed.

A convenience sample of sedated adult ICU patients receiving mechanical ventilation who had been admitted with general medical diagnoses or traumatic injuries was enrolled. In this prospective descriptive study, repeated measures (at baseline, during, and 30 minutes after the procedure) of heart rate, blood pressure, pupil size, and cortical awareness per the bispectral index (BIS) were made while each patient underwent turning or suctioning performed by nurses who were not involved in the study. Patients received sedation and analgesia per physician order, titrated at the nurses’ discretion. Differences in autonomic outcome variables across time were tested by using a general linear model followed by post hoc contrasts to examine within-subject differences, with the significance level set at .05.

The sample consisted mostly of elderly men who were white, Hispanic, or Asian. Most (67%) had baseline sedation states of total unresponsiveness or responded only to noxious stimulation. Some patients received propofol only (7%) or had no sedatives or opioids (10%); the remainder received low-dose propofol and opioid infusions. Changes in autonomic reactions across time reached statistical significance (P < .05). Of clinical importance, the greatest change in autonomic reactions occurred between the resting state and the noxious procedural state. Between these 2 states, the mean change in heart rate was +5/min, in systolic blood pressure was +10 mm Hg, and in the BIS index was +17. The mean change in pupil size was +1.5 mm in patients who received propofol only or no sedatives/analgesics and a more modest +0.5 mm in patients receiving infusions of opioids.

The study found positive autonomic reactions induced by noxious nursing activities. Changes in autonomic reactions were more notable than in other populations of patients, most likely because patients with critical medical illnesses or traumatic injuries are at higher risk for pain. The change values of heart rate, systolic blood pressure, pupil size, and BIS index offer preliminary insights into minimally required clinical parameters that indicate potential nociception. Future studies should evaluate how these autonomic parameters add to ICU nurses’ pain assessment and management practice in deeply sedated patients.

Diane Aho; Regions Hospital, St Paul, MN

To examine the relationship between nurse managers’ span of control and effectiveness in order to define an optimal span of control for a front-line nurse manager, while taking into account the individual manager’s education and experience as well as the complexity of the unit being managed. In essence, “How big is too big?” The study ultimately attempted to show at what point a span of control becomes too large and a manager’s effectiveness is adversely affected.

Correlating measurements of span of control with effectiveness criteria and managers’ demographics will help to identify the optimal span of control. With the looming nursing shortage and the increasing complexity of health care, the importance of retaining valuable staff and of recruiting valuable staff to nursing units has never been greater. Managers with optimal spans of control that allow them to be visible, available, and interactive will be necessary for successful organizations in the future.

Twenty-four inpatient nurse managers from Regions Hospital in Saint Paul, Minnesota, participated in this study. Managers completed a tool that measured span of control, including the complexity of each nursing unit as well as the number of direct reports for their current unit. Manager effectiveness data (a manager’s ability to meet organizational goals, including but not limited to financial, patient outcome, patient satisfaction, and employee satisfaction goals) were obtained for each manager. Demographic data, including experience and educational preparation, were also obtained for each manager. Data for all variables were subjected to both correlation and descriptive analyses.

Although sample size and variability limited the statistical correlation possible in this study, modestly significant correlations were found between employee engagement scores and both span of control and manager experience. Trends toward positive correlations were noted with manager experience to quality indicators and patient satisfaction scores. These trends were also noted when span of control was correlated with timely performance appraisals and patient satisfaction scores. These correlations were as expected in this study and supported through an extensive literature review. The correlations between manager experience and outcomes support Benner’s theory, which was the basis for much of this study.

Results of this study support published reports linking the role of the nurse manager to patients’ outcomes and to the satisfaction of patients and employees. The demands of a shrinking nursing workforce and the financial demands of health care reform require nurse managers to be visible and engaged at the unit level. Managers can do this only with appropriate spans of control. Further research and discussion are needed to ensure that nurse leaders define spans of control for nurse managers.

Kristen deGrandpre, Patricia Cupka, Susan Fowler; Gagnon Cardiovascular Institute, Morristown Memorial Hospital, Atlantic Health, Morristown, NJ

To investigate nurses’ perception of their workload and the role they play in achieving patients’ outcomes associated with induced hypothermia after cardiac arrest.

The national average rate of survival to hospital discharge after out-of-hospital cardiac arrest is around 5%. Patients often have neurological deficits due to lack of perfusion during their cardiac arrest. Induced hypothermia has been proposed as a possible treatment for survivors of cardiac arrest. Animal studies have confirmed the benefit of induced hypothermia, and preliminary clinical studies in humans have also suggested that inducing hypothermia may lead to improved neurological function after cardiac arrest. Research has shown that reduction in brain temperature immediately after cardiac arrest can help to decrease brain damage. The International Liaison Committee on Resuscitation recommends inducing therapeutic hypothermia after cardiac arrest for unconscious adults. Physiologically, hypothermia stabilizes the enzyme reactions, production of free radicals, and release of excitatory neurotransmitters that occur during ischemia and reperfusion. Induced hypothermia decreases the release of free radicals, leading to a reduction in the loss of brain cells, and the reduction in the release of excitatory neurotransmitters reduces the risk of seizures during and after reperfusion. Ischemia disrupts the blood/brain barrier, causing cerebral edema and extravasation of fluid into the cerebral tissue. Although the exact mechanism of action is unknown, inducing hypothermia helps maintain the integrity of the blood/brain barrier, decreasing cerebral edema after an ischemic event.

A literature review on induced hypothermia conducted in 2008, before the start of our research, yielded 10 citations when the search was limited to randomized controlled trials: 1 was done in rats, 1 was a comment, 1 was in cyanotic patients with heart disease, 1 had no randomization to hypothermia, 2 measured only serum levels of markers, 2 were feasibility studies, and 2 were suitable randomized controlled trials.
In a trial by Bernard et al, 49% of the patients treated with hypothermia were discharged with positive neurological outcomes; in the Hypothermia After Cardiac Arrest (HACA) trial, 55% had similar outcomes. Only 26% in the normothermic group were discharged with positive neurological outcomes in the Bernard et al trial, compared with 16% in the HACA trial.

A convenience sample was taken of registered nurses working in the emergency department and coronary care unit at Morristown Memorial Hospital who cared for patients undergoing induced hypothermia after cardiac arrest between January 2008 and February 2009. All nurses were direct care providers for 1 of our first 13 patients treated with the protocol. Approval from the institutional review board was obtained, and all nurses gave informed consent, before any data collection. The investigators collected data by conducting one-on-one or group interviews consisting of 9 open-ended questions revolving around the themes of hope, time, nursing workload, and outcomes for patients and their families. The interviews were taped, to promote open discussion, and were later reviewed and transcribed by an investigator. A nurse researcher was consulted to review the transcriptions and identify common themes for accuracy and interpretation. Additional patient data were obtained by concurrent or retrospective chart reviews.

In the interviews, the nurses reported that they felt they made a difference, both individualized and generalized, in the outcomes of all cardiac patients, regardless of the use of induced hypothermia. The nurses considered this an essential part of nursing that brought them significant satisfaction. They went on to comment that induced hypothermia magnified these outcomes, especially when a positive outcome was achieved. The nurses identified that induced hypothermia created a tremendous increase in their normal workload but thought the work was clearly worth the benefit. All nurses commented that because of the precise timing of tests and interventions when inducing hypothermia, they were always watching for when the next task was due. They thought that their shifts went by very quickly with the high intensity of the work involved.

Throughout the process of inducing hypothermia, the nurses were appreciative of their colleagues and additional support staff (social workers, patient liaisons, clinical specialists), who were able to focus on the families in crisis while the nurses focused on the patient. The nurses identified that even when a positive outcome was not achieved, their patients’ families felt a sense of peace and understood that everything possible had been done for their loved ones. The nurses also commented that they felt more of a bond with these patients’ families because they had traveled through the 36- to 48-hour journey of hypothermia together. The nurses thought that following this new protocol had strengthened their interdisciplinary relationships with social workers, care managers, respiratory technicians, and physicians, as well as with other nurses. Overall, the nurses described hope as the expectation of an improved future or of possibilities and optimism, and they thought that hypothermia provided hope to families. The nurses were careful not to foster false hope while answering families’ questions about the patient’s outcome when the prognosis was still unclear. In all, the nurses were enthusiastic about initiating the program and the protocol and were happy to be part of a team focused on constantly trying to improve and learn as new technology emerges.

Martha Willis, Megan Horsley, Mary Pat Alfaro, Jeffrey Anderson, Peter B. Manning, Catherine D. Krawczeski; Cincinnati Children’s Hospital Medical Center, Cincinnati, OH

Inadequate nutritional intake during the cardiac surgical period in children can affect mortality and surgical outcomes. Little is known about the relationship between failure to thrive, nutritional management, and surgical outcome at the time of the Fontan procedure. In light of this gap, we recently developed and implemented guidelines to improve nutrition in children undergoing the Fontan procedure. No previous study has assessed improving nutrition after the Fontan procedure.

Children with single ventricle physiology are at risk for failure to thrive and malnutrition. Malnutrition in hospitalized patients leads to increased complication rates, longer hospital stays, and greater hospital costs. Similarly, poor nutrition as defined by low weight for age has been linked to increased mortality in children admitted to the hospital with serious infections. Few studies have been done to examine the relationship between nutritional status and surgical outcome.

Our guidelines include speaking to the patient’s caregivers before surgery and encouraging them to give their child a high-calorie diet, increased protein, and a daily multivitamin and mineral supplement with zinc. A nasogastric tube will be placed at the end of surgery. If the child is not taking adequate calories on postoperative day 1, 50% of the child’s required calories will be administered for 9 hours overnight via the nasogastric tube. As the child’s oral intake improves, the child will be weaned off of the nasogastric feedings, with the goal of 100% oral feeding at the time of discharge. We reviewed and compared charts of patients undergoing the Fontan procedure in the previous 2 years with charts of patients who received the new nutritional guidelines.
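As a worked example of the overnight nasogastric supplement described above, the sketch below computes an infusion rate for a hypothetical child. The daily calorie requirement and formula density are illustrative assumptions, not values from the study; only the 50%-of-calories-over-9-hours structure comes from the guideline.

```python
def overnight_ng_rate(daily_kcal_required, fraction=0.50,
                      formula_kcal_per_ml=1.0, hours=9):
    """Nasogastric infusion rate (mL/h) to deliver a fraction of the
    child's daily calorie requirement overnight, per the guideline's
    50%-over-9-hours supplement. Input values are illustrative."""
    kcal_to_deliver = daily_kcal_required * fraction
    volume_ml = kcal_to_deliver / formula_kcal_per_ml
    return volume_ml / hours

# Hypothetical child requiring ~1000 kcal/day on a 1 kcal/mL formula:
rate = overnight_ng_rate(1000)
print(f"{rate:.0f} mL/h for 9 hours")
```

With a calorically denser formula (eg, 1.5 kcal/mL), the same call with `formula_kcal_per_ml=1.5` yields a proportionally lower hourly rate.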

Before implementation of the guideline, patients had a mean length of stay of 10 days, 2 patients had wound infections during their hospitalization, and the mean chest-tube duration was 5.5 days. In patients who received nutritional education preoperatively and more aggressive enteral feeding postoperatively, the mean length of stay was 9 days, 1 day less than before implementation of the guidelines. The chest-tube duration was 5 days, 0.5 days less than in the preintervention group (P = .03). Finally, no wound infections occurred after the intervention.

Although these guidelines were recently implemented, the preliminary results show that diligent nutritional assessment and better preoperative and postoperative nutrition can improve outcomes in patients undergoing the Fontan procedure. With continued implementation of these guidelines, postoperative patients will achieve their caloric goals more rapidly, which will result in continued decreases in length of stay and wound infection rates.

Mary Hardy; University of Virginia Health System, Charlottesville, VA

To describe the outcomes of and reasons for activation of a medical emergency team (MET) in patients who had been transferred from the intensive care unit (ICU) to intermediate or acute care within 24 hours and to compare them with outcomes and reasons for patients who had not been transferred from an ICU within the past 24 hours.

Patients readmitted to the ICU have mortality rates 2 to 10 times higher than those of patients not readmitted. However, little is known about outcomes for patients who require MET activation within 24 hours of transfer out of the ICU. Acute physiological instability soon after transfer out of the ICU intuitively would seem to increase in-hospital mortality and the need for ICU readmission. MET activation has been proposed as a way to improve outcomes and decrease mortality in patients experiencing physiological instability outside the ICU.

Data acquired retrospectively and prospectively for 1 year on all adult patients who required a MET response with intervention at a 600-bed level I trauma center in the Mid-Atlantic region were analyzed. During the 1-year period, 1240 MET calls occurred in which workup and interventions were required. At the time of analysis, 1081 patients who required MET intervention had been followed up until hospital discharge, including all calls in which patients had been discharged from the ICU within 24 hours. During this period, a MET nurse without other clinical responsibilities was available for response at all times.

Eight percent of MET calls involved patients discharged from the ICU within 24 hours. Patients requiring MET response within 24 hours of ICU discharge had rates of survival to hospital discharge (80.7%) equivalent to rates for other patients who required MET response (80.9%). Forty-nine percent of calls (43) for patients discharged from an ICU within 24 hours were for acute changes in respiratory status, compared with 32% (320) for that reason among all other patients. However, patients who required transfer to a higher level of care at the conclusion of the MET call had significantly higher mortality (15% vs 24% for patients requiring transfer to the ICU; P < .001).

Patients requiring MET response and intervention within 24 hours of ICU discharge are not at increased risk of hospital mortality compared with other patients who require MET calls. The most common reasons for MET activation in the post-ICU group are changes in objective respiratory findings, whereas in other patients, signs of cardiovascular deterioration (heart rate, blood pressure, or rhythm) are more common reasons. Need for transfer to a higher level of care after a MET call is strongly correlated with higher mortality rate.

Sharon Dickinson, Connie Rickelman, Dana Tschannen, Gombert Jan; University of Michigan Hospital and Health Center, Ann Arbor, MI

To determine patient and clinical characteristics related to the development of pressure ulcers in the surgical intensive care unit (SICU).

The Agency for Healthcare Research and Quality has defined pressure ulcers as an important indicator of patient safety. Despite evidence-based guidelines and protocols for prevention and treatment of pressure ulcers, sustained success in reducing development of pressure ulcers is elusive in many hospitals. The pressure ulcer rate in the SICU exceeded the national benchmark of 7% for 5 of the past 9 months.

All 1348 patients admitted to the SICU from January 2008 to August 2009 were included in the analysis. Patient characteristics (Braden score on admission and daily subscale scores, age, sex, comorbid illness, and mortality risk) and clinical care characteristics (number of vasopressor medications, missed turns, nutrition status, body mass index, weight, and operation time) were collected from electronic medical records. In addition, a delta Braden score (ie, change in Braden score from admission to the current day) was computed for each of the subscales: sensory perception, moisture, activity, mobility, nutrition, and friction and shear. Analysis of variance and binary logistic regression models were used to analyze data.

Of the 1348 patients, 256 had pressure ulcers develop while in the hospital. According to analysis of variance, patients with pressure ulcers were more likely to have advanced age, greater use of vasopressors, changes in scores on Braden subscales (ie, sensory, moisture, mobility, friction, nutrition), and higher mortality risk. In addition, patients with pressure ulcers had shorter time in the operating room, smaller change in score on the Braden activity subscale, and lower weight. When incorporated into the binary logistic regression model, 6 variables were significant predictors of development of pressure ulcers: first Braden score, delta Braden moisture score, delta Braden activity score, mortality risk, age, and weight.

Several noteworthy patient and clinical characteristics were related to development of pressure ulcers. This study showed that the Braden score on admission can be used to identify patients at increased risk of development of pressure ulcers. Other high-risk factors (eg, low weight, advanced age) allow unit leaders to implement appropriate clinical interventions to manage these conditions in an effort to prevent pressure ulcers.

Mary Beth Makic, Karen Lovett, M. Fareedul Azam; University of Colorado Hospital, Aurora, CO

To use high-definition simulation to demonstrate competence of registered nurses in the placement of esophageal temperature probes (ETPs). Specific study aims were to demonstrate accurate placement of the ETP and effectiveness of high-definition simulation with anatomic imaging as a valuable platform for education and competency assessment.

Current guidelines state that unconscious survivors of cardiac arrest should be treated with therapeutic hypothermia. Research suggests that core temperature monitoring is the most accurate basis for guiding therapeutic hypothermia interventions. An ETP is one method of assessing core temperature. Accurate placement of the ETP in the distal part of the esophagus is necessary to reflect core temperature near the left atrium. Literature is lacking to guide the placement of ETP monitoring devices by nurses.

Investigators received competency training from an anesthesiologist on proper insertion of an ETP device by using high-definition simulation with anatomic enhancement technology. Nurse participants were provided a 30-minute educational session that used a 3-dimensional high-definition program to review critical elements for placing an ETP, with the anatomic landmarks needed for optimal core temperature monitoring. After the training, each study participant demonstrated the skill in simulation, with anatomic imaging used to assess correct placement of the device. Study participants completed a survey on the effectiveness of using high-definition simulation in acquisition of a new skill, ETP placement, before and after the training.

A total of 32 nurses participated in the study. The mean length of practice in intensive care units was 4.7 years. Survey results indicated that participants had increased confidence in their ability to safely place an ETP (mean score: before, 2.66; after, 4.66) and increased knowledge for placement (mean score: before, 2.50; after, 4.27). Nurses reported high satisfaction (mean, 4.88) with high-definition simulation for learning a new skill. Nurses did not demonstrate difficulties with the skill; however, 53.1% of participants needed more than 1 attempt to accurately measure the ETP for optimal anatomic placement in the distal part of the esophagus. Nurses overestimated the depth of ETP insertion needed for optimal placement, which would result in temperatures being measured distal to the left atrium.

Current literature is lacking to guide accurate blind placement of an ETP to obtain core temperature for therapeutic hypothermia. Nurses competently place several blind tubes, such as gastric tubes; however, ETP placement is highly dependent on accurate anatomic location for optimal temperature measurement. Providing anatomic imaging with skill competency acquisition enhanced nurses’ awareness of accurate measurement for ETP placement near the left atrium for core temperature monitoring.

Tara Sacco, Megan E. Harvey, Gail L. Ingersoll, Susan Ciurzynski; University of Rochester Medical Center, Rochester, NY

The healthy work environment initiative has had a great impact on the quality of the nursing workplace. The critical care setting provides opportunities for nurses to experience compassion satisfaction but also exposes staff to individual and familial crises that may result in fatigue and burnout. Therefore, the purposes of this study are to determine the prevalence of compassion satisfaction and compassion fatigue in this population and to describe contributing demographic and organizational factors.

Critical care nurses often express that they are mentally and emotionally affected by their professional experiences. The notion that helping clients can affect caregivers’ emotional and mental health has been described as vicarious traumatization and secondary traumatic stress. These concepts are well defined among mental health counselors and recently have been studied in small populations of nurses, but these phenomena have not been studied in critical care nurses.

Adult, pediatric, and neonatal critical care nurses in a large teaching hospital in Upstate New York will be invited to complete a demographic survey and the Professional Quality of Life (ProQOL) Concise 9 survey in the fall of 2010. The ProQOL uses 3 subscales to measure compassion satisfaction and compassion fatigue (secondary traumatic stress and burnout). The surveys will be distributed electronically with the survey software SurveyMonkey. Additionally, participants will be invited to enroll in a password-protected online platform where they will receive an incentive for their participation. Survey completion is voluntary, and data will be analyzed with no identifiers.

Data from the ProQOL tool will be scored and summed and then converted to t scores for analysis. Descriptive statistics will be used to summarize demographic data. Correlation and regression analysis will be used to assess relationships between demographic variables and presence of compassion satisfaction and compassion fatigue. Analysis of variance will be used to compare ProQOL scores by unit, by population of patients served, and by nursing service. Additionally, reliability and validity statistics will be determined in this study population. Findings of the study will be disseminated and may be used to develop prevention and/or intervention plans.
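The conversion of raw ProQOL subscale scores to t scores can be sketched as a standardization to mean 50 and SD 10. This is a generic illustration of the arithmetic, not the ProQOL manual’s normative tables; the function name and sample values are assumptions for the example:

```python
def to_t_scores(raw_scores):
    """Standardize raw scale scores to t scores (mean 50, SD 10),
    using the sample mean and sample standard deviation."""
    n = len(raw_scores)
    mean = sum(raw_scores) / n
    sd = (sum((x - mean) ** 2 for x in raw_scores) / (n - 1)) ** 0.5
    return [50 + 10 * (x - mean) / sd for x in raw_scores]

# Illustrative raw subscale sums of 10, 15, and 20 standardize to
# t scores of 40, 50, and 60.
t_scores = to_t_scores([10, 15, 20])
```

In practice the published instrument norms, rather than the sample statistics, would supply the mean and SD.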

The study aims to determine the professional quality of life in a sample of critical care nurses. The findings will be used to highlight characteristics of units with high degrees of compassion satisfaction and to identify units experiencing a high prevalence of compassion fatigue. Programs can then be applied to prevent compassion fatigue. As these are put into place, the nursing work environment will become healthier, leading to increased recruitment and retention and better outcomes for patients.

Brianna Czaikowski, C. Todd Stewart; St Joseph’s Children’s Hospital, Marshfield, WI

To enhance clinical neurological assessment of intubated and/or sedated patients in the pediatric intensive care unit (PICU) at St Joseph’s Children’s Hospital by assessing the state of consciousness. The study proposes to (1) test the interrater reliability of the pediatric Full Outline of Unresponsiveness (FOUR) Score coma scale and (2) evaluate the validity of the pediatric FOUR Score scale in predicting mortality, morbidity, and long-term outcomes compared with the Glasgow Coma Scale (GCS).

The GCS is used in prehospital and hospital settings to predict mortality, morbidity, and long-term outcomes in acute neuroscience patients. Shortcomings of the GCS are as follows: (1) the verbal component cannot be tested in intubated patients; (2) the GCS does not include brainstem reflexes or changes in breathing patterns, which reflect severity of coma; (3) recent studies have shown a lack of correlation between outcome and GCS score; and (4) the GCS does not include developmental milestones.

A prospective study design was used, based on methods applied in previous studies. We modified the FOUR Score for pediatric patients, with a goal of prospectively studying the pediatric FOUR Score in 120 patients. Participating nurses received training that included a discussion session, a PowerPoint presentation, and detailed examples of the proper use of both scales. Each PICU patient will be simultaneously assessed by 2 experienced and trained nurses using the pediatric FOUR Score scale and the GCS for 2 consecutive days, and then the patient will be assessed at PICU and hospital discharge by using the Pediatric Cerebral Performance Category (PCPC) Scale. Each nurse will assess the patient only once to avoid bias.

Data collection started September 1, 2010, in the PICU at St Joseph’s Hospital in Marshfield, Wisconsin. We will test the hypothesis that, for intubated pediatric patients, excellent interrater reliability in the total FOUR Score (κw = 0.90), with a lower limit of the 95% confidence interval (CI) of 0.80 or greater, will be achieved. The area under the receiver operating characteristic curve and its 95% CI will be calculated for the pediatric FOUR Score and the GCS. In theory, the pediatric FOUR Score and the PCPC will be highly correlated, and the GCS and PCPC will have a weaker correlation. The PCPC scale has been previously validated and tested among patients.
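The planned area-under-the-curve comparison rests on the equivalence between the ROC AUC and the Mann-Whitney U statistic: the AUC equals the probability that a randomly chosen patient with the outcome scores higher than a randomly chosen patient without it. A minimal sketch of that computation (function and variable names are illustrative, not part of the study protocol):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC area under the curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) score pairs in which the
    positive case scores higher, with ties counted as half."""
    pairs = len(scores_pos) * len(scores_neg)
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / pairs
```

A scale that perfectly separates outcomes yields an AUC of 1.0; one carrying no information yields 0.5.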

This modified scale would provide many of the same advantages as shown in the previous study of the original FOUR Score. Interrater reliability among examiners, simplicity, and elimination of the verbal component would make the FOUR Score a more valuable neurological assessment tool than the GCS. The modified pediatric FOUR Score has the potential to provide proper neurological/sedation assessment, thereby decreasing the number of ventilator days and decreasing the incidence of ventilator-associated pneumonia, among other advantages.

Lori Hadas, Kevin Accola, Mike Butkus; Florida Hospital, Orlando, FL

To determine whether the incidence of left pleural effusions is reduced by leaving a Blake drain in the left pleural space for several days after coronary artery bypass grafting (CABG) surgery, using the left internal mammary artery (LIMA), versus using a traditional large-bore chest tube, which is typically removed on postoperative day 1.

The LIMA has been accepted as the conduit of choice for CABG surgery because of its superior long-term patency rate. The harvest of the LIMA, which typically necessitates pleural dissection, may contribute to the development of pleural effusion. Standard practice has been to place a large-bore chest tube, which is typically removed on postoperative day 1. Pleural effusions, which may necessitate thoracentesis to alleviate symptoms, often develop in these patients.

A retrospective comparative analysis was performed on the study population, which comprised 1 surgeon’s cohort of patients who underwent CABG surgery with the LIMA graft at 1 center. The historical group, group A (n = 200), comprised patients who underwent cardiac surgery and had a large-bore chest tube placed during the operative procedure, removed on postoperative day 1 or day 2 during 2007. The comparative group, group B (n = 200), included patients who underwent cardiac surgery and had a Blake drain placed during the operative procedure during 2008 and 2009. The project was reviewed and approved by the hospital’s institutional review board.

In the Blake group, 16 patients (7.3%) underwent thoracentesis, whereas in the non-Blake group, 37 patients (20.1%) underwent thoracentesis (P = .001). This difference indicates a significant reduction in postoperative thoracentesis. Thoracentesis is not without risks and often necessitates the withholding of anticoagulants, such as warfarin, which places the patient at even greater risk of thromboembolism until the international normalized ratio is within the therapeutic range. Patients who need a thoracentesis are usually symptomatic, require prolonged oxygen therapy, and have limited ability to ambulate and mobilize because of complaints of shortness of breath.

Leaving a Blake drain in the pleural space after CABG surgery can decrease the incidence of postoperative thoracentesis. Placement of the drain, however, is only 1 aspect of this updated practice. There are specific nursing implications regarding the care and maintenance of the drain that will maximize its effectiveness and promote earlier removal, such as maintaining tube patency, keeping the JP bulb empty and charged, providing optimal pain management, and reporting intake and output accurately.

Jeanette Faulk; Utah Valley Regional Medical Center and Brigham Young University, Provo, UT

To determine if demographics or financial compensation influences intensive care nurses’ participation in research.

Evidence-based practices are essential for safe and effective nursing care. Research findings may be used to establish nursing practice guidelines and standards of care. Although many hospitals and nursing schools encourage research activities, some nurses are hesitant to initiate or participate in nursing research. Limited information about factors that influence nurses’ participation in research has been published. What influences clinical nurses to participate in research?

A 10-item questionnaire was designed, approved by the institutional review board, and distributed during staff meetings to 2 groups of critical care nurses in an urban hospital in the Western United States. Participation was voluntary and anonymous. Consent was implied by completion of the questionnaire. Nurses were instructed to read the questionnaire and circle the answer that best described them. Five questions asked for demographic information. The other 5 questions queried interest in research participation. Data were analyzed by using the Fisher exact test (SAS Institute, Cary, North Carolina).
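The Fisher exact test used here sums hypergeometric probabilities over all 2 × 2 tables with the observed margins whose probability is no greater than that of the observed table. The study ran this in SAS; the sketch below is for illustration only, with assumed function names:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact P value for the 2 x 2 table [[a, b], [c, d]].

    Sums the probabilities of every table with the same margins whose
    probability is no greater than that of the observed table.
    """
    row1, col1, n = a + b, a + c, a + b + c + d

    def prob(x):  # hypergeometric probability of x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Small tolerance guards against floating-point ties.
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))
```

For the balanced table [[3, 1], [1, 3]], this returns 34/70 ≈ 0.486, matching standard statistical software.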

A total of 63 nurses participated in the study (50 female, 13 male). Female nurses in the intensive care unit are significantly more willing to participate in nursing research than are male nurses, especially if they are financially compensated (P = .03). Most female nurses (51.5%) reported that they would participate in 1–5 hours of compensated research per month, compared with less than 5% of the male nurses. Nurses between the ages of 20 and 25 years (17.4%) and 50 to 55 years (21.54%) were more willing to participate in 1–5 hours of uncompensated research a month (P = .03). In all other age groups, 13% or less of the nurses indicated that they would participate in research without compensation. One intensive care unit reported much more satisfaction with research than the other.

Demographics and financial resources play a key role in some critical care nurses’ decisions to participate in research. This study suggests that nurses’ sex and age and the lack of financial compensation are significant factors associated with interest in research. Male nurses were less willing to participate in nursing research than were female nurses. Younger and older nurses were more willing to participate in unpaid research activities than were middle-aged nurses. These findings may be considered when planning critical care nursing research studies.

Karen Oberman, Helen Sereda Pawluk, Beth A. Staffileno, Denina McCullum-Smith; Rush University Medical Center, Chicago, IL

(1) To reduce the number of patients’ falls to zero through implementation of fall prevention initiatives and (2) to examine factors unique to patients’ falls.

The rate of patients’ falls per year has increased since 2006 in our unit. To increase patient safety, change to current practice was needed. We revised current practice to reduce fall rates to zero. Several nursing initiatives were implemented.

A retrospective review was conducted to compare monthly fall rates from before (August 2009) to after (July 2010) implementation of fall-prevention initiatives. Fall rate is defined as the number of falls per 1000 patient days. Yearly fall-rate trends for 2005 through 2010 were plotted for comparison. Factors unique to patients’ falls were assessed through chart review. The Safe Project, implemented in August 2009, consisted of mandatory in-service training. The staff learned how to screen for patients at serious risk of falling and to identify a safety officer on each shift for patient rounds.

The median fall rate before the fall-prevention initiatives was 6.06 compared with 3.03 after the fall-prevention initiatives. Factors unique to patients’ falls were calculated on the basis of 36 cases after the fall-prevention initiatives. Five unique factors in particular were related to falls (August 2009–July 31, 2010): (1) neurological/cognitive deficit (94%), (2) activity level (94%), (3) comorbid conditions (100%), (4) on fall safety precautions (100%), and (5) nurses with less than 5 years experience on the unit (94%).

Analyzing the safe initiative following ‘per protocol’ procedures (ie, using the safety officer from August 2009–May 2010), fall rates were reduced 59% (median, 6.06 vs 2.48 patient falls per 1000 patient days). Analysis of the safe initiative after intention-to-treat procedures (ie, changes in the safety officer occurred during June and July, using all data August 2009–July 2010) showed a 50% reduction in fall rates (6.06 vs 3.03 patient falls per 1000 patient days).
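The rate and reduction arithmetic above follows directly from the definitions. A minimal sketch (function names are illustrative):

```python
def fall_rate(falls, patient_days):
    """Fall rate expressed as falls per 1000 patient days."""
    return falls / patient_days * 1000

def percent_reduction(before, after):
    """Relative reduction between two rates, as a percentage."""
    return (before - after) / before * 100

# The abstract's medians reproduce the reported reductions:
# per protocol, 6.06 -> 2.48 is a ~59% reduction;
# intention to treat, 6.06 -> 3.03 is a 50% reduction.
per_protocol = percent_reduction(6.06, 2.48)
intention_to_treat = percent_reduction(6.06, 3.03)
```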

Theresa Haley; University of California, Los Angeles, CA

Patients with sepsis continue to challenge critical care teams daily. Despite this, little is known about the incidence of systemic inflammatory response syndrome (SIRS) and sepsis in the emergency department, where most of these patients present to the hospital. In addition, it has yet to be determined how best to identify emergency department patients who show early signs of either SIRS or sepsis. The purpose of the study was to describe the incidence of individual SIRS criteria and to identify early demographic and clinical correlates of SIRS and sepsis diagnoses in the emergency department.

Shock and sepsis are serious medical conditions associated with inadequate tissue perfusion. If the conditions are not diagnosed and treated promptly, a deadly chain of chemical reactions results in multiorgan failure and death. Reports estimate that up to 60% of patients with sepsis present first in the emergency department, and studies have shown that early identification of sepsis by using SIRS criteria can improve survival rates.

The study was a descriptive, correlational design carried out at an academic, level III hospital and a community, level II hospital in Los Angeles. A random sample of 556 individuals was used to determine the incidence, and a random sample of 130 individuals (power of 0.80), half with SIRS and half without SIRS, matched for age and sex, was reviewed to identify early demographic and clinical correlates of SIRS. Demographic and physiological variables including age, temperature, heart rate, respiratory rate, white blood cell count, blood pressure, mental status, presumed infection, lactate levels, and presence of a SIRS diagnosis were analyzed.

Triggers occurred 777 times, among 556 patients, within 6 hours of presentation to the emergency department. Of the 10 most robust variables used in the logistic regression, elevated body temperature (P = .04; odds ratio [OR], 9.624; 95% confidence interval [CI], 1.096–84.505), increased respiratory rate (P < .001; OR, 92.250; 95% CI, 16.317–521.535), and altered mental status (P = .02; OR, 0.660; 95% CI, 0.471–0.925) correlated significantly with later development of SIRS.
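For readers checking the odds ratios, a logistic regression coefficient β with standard error SE converts to an odds ratio exp(β) with a 95% Wald confidence interval exp(β ± 1.96·SE). A minimal sketch; the coefficient and standard error in the example are illustrative, not values taken from the study:

```python
from math import exp, log

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% Wald CI from a logistic regression
    coefficient (beta) and its standard error (se)."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)

# Illustrative example: beta = ln(2) with SE 0.25 gives OR 2.0
# and a 95% CI of roughly (1.23, 3.26).
or_, ci_lo, ci_hi = odds_ratio_ci(log(2), 0.25)
```

A CI that excludes 1 (as for all three variables reported above) corresponds to a statistically significant association.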

The incidence of sepsis indicators among patients presenting in the emergency department occurs at a rate that warrants increased vigilance. Focused efforts to identify these indicators and respond to changes in body temperature, respiratory rate, and mental status before admission to the intensive care unit can reduce delay and assist critical care providers to manage these patients proactively. The evidence of demographic and physiological correlates of the SIRS diagnosis seen in this study sets a foundation for future studies and the development of hospital systems and models for delivery of care for patients with sepsis.

Jennifer Bond, Brenda Eden, Linda Fulk, Sherry Robinson, Larry Hughes; Memorial Medical Center, Springfield, IL

To establish a baseline for incidence and prevalence of delirium in a medical-surgical intensive care unit (ICU) through use of the Confusion Assessment Method for the ICU (CAM-ICU) and to determine whether nurses and physicians are able to assimilate the features of delirium and diagnose delirium during routine assessments.

Delirium is a serious clinical syndrome that affects 21%–73% of ICU patients. Linking symptoms of delirium in a timely manner and establishing a diagnosis are essential to providing positive outcomes for critically ill patients. Nurses and physicians often fail to correlate delirium symptoms with the diagnosis despite the availability of validated tools for assessment. Use of a standardized delirium assessment tool may improve nurses’ recognition of symptoms of delirium and physicians’ diagnosis of delirium.

A quantitative prospective cohort study was conducted on a 15-bed medical-surgical ICU at a Midwestern university-affiliated Magnet hospital. Data were collected on 120 consecutive admissions that met inclusion and exclusion criteria. Nursing research assistants were educated and validated in assessment of patients by using the CAM-ICU tool and the Charlson Comorbidity Index (CCI), performed daily assessments, and reviewed patients’ records daily for documentation of delirium and associated features. Descriptive statistics were used to analyze demographic data and the CCI. Staff nurse and physician documentation was analyzed for recognition of delirium features and diagnosis of delirium.

A total of 109 patients (mean age, 62 years; 58% male; mean stay, 5.11 days) completed the study. Incidence of delirium was 10% (11/109), and prevalence was 30% (33/109). No significant difference in severity of illness was found between the delirious and nondelirious patients. Physicians documented the diagnosis of delirium in 9% (3/33) of delirious patients and documented at least 1 symptom of delirium in 33% (11/33) of patients who had delirium. Nurses documented delirium symptoms in 94% of delirious patients (31/33), demonstrating a significant association (P < .001) between nurse documentation of delirium symptoms and a positive delirium screening with the CAM-ICU tool.
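The incidence and prevalence figures above are simple proportions over the 109 patients who completed the study. A sketch of the arithmetic (function names are illustrative):

```python
def proportion_pct(cases, total):
    """Cases as a percentage of the total, rounded to a whole percent."""
    return round(cases / total * 100)

# Incidence: patients in whom delirium developed, 11 of 109.
# Prevalence: patients with delirium at any point, 33 of 109.
incidence_pct = proportion_pct(11, 109)
prevalence_pct = proportion_pct(33, 109)
```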

The incidence and prevalence of delirium in this study were lower than previously reported. Physicians’ diagnosis and documentation of symptoms of delirium were rare. Although nurses identified symptoms of delirium 94% of the time, without the use of an assessment tool they were unable to correlate symptoms and communicate the diagnosis of delirium to physicians. Recognition of delirium by the interdisciplinary team can be greatly improved through nursing implementation of a validated assessment tool.

Janet Palamone, Linda Morris, Susan Brunovsky, Matt Groth, Mary J. Kwasny; Northwestern Memorial Hospital, Chicago, IL

Neurosurgical patients tend to have the highest rate of deep vein thrombosis (DVT) among postsurgical patients. Although early interventions are recommended in intensive care units (ICUs), many factors can make this the most neglected time for aggressive prevention measures. Our hypothesis was that, without changing any of the current measures to prevent DVT, a structured program of foot and ankle range-of-motion (ROM) exercises would decrease the incidence of DVT in patients in the neuroscience intensive care unit.

For large university teaching hospitals, the target set by the University Health System Consortium was a DVT rate half of ours. Currently in the United States, 200 000 to 400 000 people are afflicted with a DVT; one-third of them have a chronic disabling condition known as postthrombotic syndrome.

This experimental study, approved by the institutional review board, examined 315 patients over the age of 18 who were admitted to the neuroscience ICU and who received the foot exercises as a method of DVT prevention. Data for the outcome measures were derived from bedside measurement of Doppler ultrasound images of the lower extremity, the percentage of time the exercises were performed, the patient’s history, and standard DVT prevention measures.

Overall, there was no difference in DVT rate during the study period in 2008 and 2009. However, patients in whom a DVT developed had a significantly lower compliance rate with the ROM exercises (38.7%) than did patients in whom a DVT did not develop (58.4%; P < .001).

Foot and ankle ROM exercises may have a promising role in reducing incidence of DVT in patients in neuroscience ICUs when the exercises are done diligently. Early mobilization improves outcomes and ROM exercises improve blood flow in the lower extremities. Nurses are the key to incorporating these exercises into the daily plan of care and to the success in reducing DVT rate.

Caroline Arbour, Céline Gélinas; McGill University School of Nursing, Montreal, Quebec

To examine the usefulness of measuring regional cerebral oxygenation (rSo2) with near-infrared spectroscopy (NIRS) to detect pain in critically ill adults during common procedures in the intensive care unit (ICU). More specifically, changes in rSo2 during a painful procedure (eg, removal of a mediastinal chest tube) were examined and were compared with the patient’s self-report of pain and pain-related behaviors.

Although assessment of behaviors is recommended for detecting pain in nonverbal patients, behavioral assessment cannot be used in heavily sedated and paralyzed patients. The time has come to explore other pain indicators. NIRS is a noninvasive technique for measuring regional cerebral oxygenation. It was first tested for the purpose of pain assessment in neonates and in adults undergoing cardiac surgery. Cerebral oxygenation was found to increase during nociceptive procedures known to be painful.

A prospective repeated-measures design was used, and 32 postoperative patients in the cardiac surgery ICU participated. Patients were observed during a 1-minute period at rest, during removal of a mediastinal chest tube, and 15 minutes after the procedure. Continuous measurement of rSo2 was recorded with the NIRS device (INVOS 5100). Behaviors were rated with the Critical-Care Pain Observation Tool (CPOT), and patients were asked to rate their level of pain by using a numeric scale from 0 to 10 at each assessment. Data on analgesic agents administered within 4 hours before the procedure were also collected. Descriptive statistics were calculated for all variables, and t tests were performed.

Patients were mostly males (74%) with a mean age of 63 years and were mainly admitted for coronary artery bypass graft surgery (67%). A total of 26 patients (81%) received a dose of morphine within 4 hours before the procedure. The rSo2 value was significantly different during removal of the mediastinal chest tube compared with the baseline value at rest and the value after the procedure (t tests; P < .001). As opposed to a decrease in rSo2 (<1%) in patients who did receive morphine, an increase of 1.5% in rSo2 was observed in patients who did not receive morphine. Patient’s self-report of pain intensity increased from 2 (at rest) to 5 out of 10 during the procedure, and CPOT scores also increased from 0.70 to 5 out of 10.

As expected, changes in rSo2, higher pain intensity, and behavioral reactions were observed during the painful procedure. However, in contrast to previous research with NIRS, rSo2 increased only slightly, and only in patients who did not receive morphine. Because of its analgesic properties, morphine may lessen the increase in regional cerebral blood flow that results from the cortical activation associated with pain. NIRS is new to the field of pain assessment, and its usefulness remains to be studied.

Miriam Alices, Janie Heath, Sharon Bennett, Thomas Joshua; Medical College of Georgia, Augusta, GA

Quality core measures of the Joint Commission and the Centers for Medicare and Medicaid Services include tobacco cessation standards. Given the documented health risks of tobacco use, it is important for health professionals to assess tobacco status and provide cessation interventions. The purpose of the study was to examine health care providers' perceptions, knowledge, self-confidence, and intentions related to providing tobacco cessation interventions to adult, hospitalized, tobacco-dependent patients with respiratory disorders.

Tobacco use is one of the chief avoidable causes of illness and death in our society, affecting quality of life for patients with respiratory diseases. Tobacco cessation interventions are effective and are associated with improvement in morbidity and mortality and reduction in health care cost. Quality improvement activities are needed to enhance implementation of tobacco cessation interventions by health care practitioners for these vulnerable populations.

The Theory of Reasoned Action (TRA) guided the study. A 1-group pretest/posttest design was used. The Rx for Change: Clinician Assisted Tobacco Cessation Continuing Education Survey (51 items) was distributed to 79 health care providers of a large academic medical center before and after an educational session. Descriptive statistics were calculated and Pearson correlation and paired t tests were done. TRA beliefs and their relationships with the dependent variable (intention to integrate tobacco cessation interventions into clinical practice) were determined.
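The correlational piece of this analysis is a Pearson product-moment correlation between belief scores and intention scores. A minimal sketch follows; the 1–5 ratings below are invented for illustration and are not survey data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical 1-5 ratings: self-confidence vs intention to intervene
confidence = [2, 3, 3, 4, 5, 4, 2, 5]
intention = [2, 3, 4, 4, 5, 3, 3, 5]
r = pearson_r(confidence, intention)
print(round(r, 3))
```

A value of r near 0.7, as reported for self-confidence and control beliefs, indicates a strong positive association between the belief and the intention to intervene.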

Variables influencing health care providers' intentions to provide tobacco cessation interventions were examined. Pearson correlations were statistically significant (P < .05) for all pairs analyzed. Self-confidence (r = 0.698; P < .001) and control beliefs (r = 0.661; P < .001) had the strongest correlations with intentions to provide tobacco cessation interventions. Correct responses on knowledge items ranged from 34% to 89.5% on the presurvey and from 61% to 99% on the postsurvey. Overall, the mean score for intentions to provide tobacco cessation interventions increased from 3.42 before the session to 4.10 after the session (scale, 1–5).

Training of health care professionals in brief intervention methods affects their performance. Implementation of this project contributes to the body of knowledge on understanding and predicting health care behaviors. Integrating tobacco cessation interventions into the daily practice of acute care practitioners is challenging; however, evidence-based strategies are available, as are opportunities to increase awareness and knowledge of these interventions.

Susan A. Walsh, Cecelia Gatson Grindel, Barbara Woodring, Lori Schumacher, W. Todd Maddox; Georgia State University, Atlanta, GA

Nearly 1 million new and recurrent myocardial infarctions occur each year, with 10% of hospitalized patients having unrecognized ischemic symptoms. Inexperienced nurses are expected to recognize myocardial infarction, yet are less able to classify symptom cues and reach accurate conclusions than are experienced nurses. The purpose of this study was to test an educational intervention that uses theories of pattern recognition to develop critical thinking about myocardial infarction and improve nursing students’ clinical decision making and clinical judgment by using high-fidelity simulations of patients.

The aging at-risk population of baby boomers will strain the health care system with its requirements for complex care. The convergence of retiring nurses and increased demand for bedside clinicians augurs an influx of inexperienced nurses to the bedside. Inexperienced nurses are less able to think critically and achieve accurate diagnostic conclusions, potentially compromising patients' safety. The possibility of missed myocardial infarction or inappropriate interventions due to underdeveloped critical thinking processes is a consistent concern in health care.

This study used a quasi-experimental, 3-group pretest/posttest design and qualitative data to triangulate information on critical thinking, clinical decision making, and clinical judgment with respect to patients with myocardial infarction. A sample of 54 junior-year students in the baccalaureate nursing program at a large metropolitan university was divided into pairs and randomized to 1 of 2 control groups (simulation and nonsimulation) or an experimental intervention simulation group. Data were collected on pattern recognition in myocardial infarction, critical thinking in myocardial infarction, and self-perception of clinical decision making. Diagnostic efficiency and accuracy were measured. Triangulation on clinical decision making was conducted with semistructured interviews using a "thinking aloud" technique. Qualitative data were analyzed and compared among groups.

Participants given prototypes for myocardial infarction via simulation improved significantly in pattern recognition (t2 = 3.153; P = .04). Students who received a non–myocardial infarction scenario and feedback-based debriefing had the greatest gains in clinical reasoning, including development of clinical decision making via analytic hypothetico-deductive and Bayesian reasoning processes and learned avoidance of heuristics. These students identified salient symptom cues, analyzed data in a more complex manner, and reflected on their experience in a way that indicated improved learning. Students given simulation scenarios that involved only myocardial infarction developed deleterious heuristics and showed fewer gains in clinical reasoning, although students in both simulation groups showed greater critical thinking ability than did students in the nonsimulation control group.

Findings support the use of simulation to improve recognition of patterns of myocardial infarction, clinical reasoning, and clinical decision making. The results also emphasize the significance of simulation scenario construction and debriefing to achieving learning outcomes. The findings could be used to guide further research on the use of simulation to improve critical thinking, clinical decision making, and clinical judgment in nursing students. Sponsored by: AACN and Philips Medical Systems.

Mary Fran Tracy, Linda Chlan, Kay Savik; University of Minnesota Medical Center, Fairview, Minneapolis, MN

To determine the contributions of known correlates to peripheral muscle weakness in patients receiving mechanical ventilation in the intensive care unit (ICU).

In patients receiving mechanical ventilation, focus is increasingly on improving weaning from ventilator support, optimizing sedative use, and increasing early mobility in order to decrease ventilator days and avoid deconditioning. Immobility leads to increased muscle weakness, which may affect the patient’s ability to be weaned from ventilator support and recover physically. However, little is known about which specific risk factors contribute most to weakness in ventilator-dependent ICU patients.

A descriptive, correlational study was conducted with a sample of 95 patients receiving mechanical ventilation in 5 ICUs in the urban Midwest. Patients were followed until extubation or for a maximum of 30 days after enrollment in the study. Hand dynamometry, a measure of peripheral muscle strength, was performed daily with a Jamar device to measure grip strength. Additional data collected included demographics; scores on the Acute Physiology and Chronic Health Evaluation (APACHE) III; and use of sedatives, neuromuscular blockers, insulin infusions, and steroids. Sedation frequency was computed as the number of sedatives administered in each 4-hour period, summed over the day; this count did not consider specific doses.
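The sedation-frequency measure described above is a dose-agnostic tally. A minimal sketch follows; the administration times and the helper name `sedation_counts` are illustrative assumptions, not part of the study's actual data handling.

```python
from collections import Counter

def sedation_counts(admin_hours):
    """Tally sedative administrations per 4-hour block and per day.

    admin_hours: clock hours (0-23) at which any sedative dose was
    given. Doses are simply counted, not weighted by amount, mirroring
    the study's dose-agnostic frequency measure.
    """
    per_block = Counter(hour // 4 for hour in admin_hours)  # six 4-hour blocks per day
    daily_total = sum(per_block.values())
    return per_block, daily_total

# hypothetical administration times for one patient-day
per_block, daily_total = sedation_counts([1, 3, 6, 6, 14, 22])
print(dict(per_block), daily_total)
```

The daily total is the value entered into the level 2 model as "sedation frequency" for that study day.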

The sample was 50% male, with a mean age of 60.1 years (SD, 15.1 years). Patients received a mean (SD) of 9 (7.8) days of mechanical ventilation before enrollment and 7.1 (7) days during the study. Patients received sedatives on a median of 100% (range, 25%–100%) of study days. Mean (SD) grip strength was 12.3 (17.9) lb-force at baseline. Mixed models analysis was used to examine known correlates of muscle weakness. A level 1 model indicated significant unexplained variance in grip strength over time (z = 5.41; P < .001) and in initial grip strength (z = 5.37; P < .001), warranting further analysis. Level 2 modeling indicated that sedation frequency (β = −0.31; SD, 0.14; P = .03), female sex (β = −7.3; SD, 3.0; P = .02), age (β = −0.28; SD, 0.28; P = .01), and days enrolled (β = −0.45; SD, 0.12; P = .0004) explained a significant amount of variance in grip strength over time.

Age, female sex, prolonged ventilator support, and frequency of sedative administration contributed to diminished grip strength in this sample. Interventions are needed that reduce immobility and weakness. Sedative regimens that manage symptoms yet reduce administration frequency are needed. Nursing contributions to care regimens that manage symptoms in this population are important, including appropriate administration of sedatives and interventions to maintain muscle strength.

Elizabeth Bridges, Catherine Kirkness, Karen Evers; Travis Air Force Base–60th Medical Group/Clinical Investigations Facility, Travis Air Force Base, CA

This study, which is the largest analysis of the transport of critically injured casualties from Iraq and Afghanistan, had the following purposes: (1) to describe characteristics and care of casualties transported by US Air Force Critical Care Air Transport Teams (CCATT) from October 2001 to May 2006, (2) to describe the incidence of en route clinical deterioration and equipment challenges, and (3) to determine on the basis of the occurrence of en route clinical deterioration if there is an optimal time to transport after injury.

Military casualties are typically more severely injured than US civilian trauma patients. Additionally, military casualties undergo long transports: 7000 miles in less than 5 days during the stabilizing phase of their injuries. In US hospitals, transport increases the risk for clinical deterioration. No systematic analysis of the initial through definitive care of these patients, including CCATT transport, had been done. Of interest are en route safety and determination of an optimal time to transport on the basis of clinical status.

A retrospective review was conducted of 2439 critically injured or ill patients (3492 CCATT transports). For a subset of 236 patients (83% with battle injuries) with CCATT records (276 transports), additional data were collected from medical records from the area of responsibility, Germany, and the United States. The median time from injury to transport from the area of responsibility to Landstuhl Regional Medical Center (LRMC), Germany, was 1 day, and from LRMC to the United States, 4 days. Each patient had more than 2500 data points describing care across the continuum (from the area of responsibility to the United States), including cause of injury (improvised explosive device, gunshot wound), injury severity (Injury Severity Score or score on Acute Physiology and Chronic Health Evaluation II), equipment (mechanical ventilator, intracranial pressure and arterial pressure monitors), and physiological status.

Patients had polytrauma (47%) with severe injury (Injury Severity Score: mean, 17; SD, 10; Glasgow Coma Scale score: mean, 9.5; SD, 5). Injury type: head, 64%; extremity, 54%; thorax, 32%; abdomen, 10%. Cause: improvised explosive device, 54%; fragment, 27%; gunshot wound, 22%. Mechanical ventilator, 73%; arterial pressure monitor, 65%; hemoglobin <10 g/dL, 34%. Head trauma: intracranial pressure monitor, 59%; intracranial pressure >20 mm Hg, 27%; cerebral perfusion pressure <60 mm Hg, 29%. Body temperature >38°C within 24–72 hours after injury, 65%. Pulmonary compromise (Pao2/Fio2 <300): head trauma, 61%; extremity trauma, 43%. Deterioration occurred on 29% of flights: hypotension, 13%; increased need for mechanical ventilation/oxygen, 5%; intracranial pressure >20 mm Hg, 4%. Risk factors for en route deterioration: initial acuity (r = 0.3), base deficit (r = −0.5), lowest Spo2 (r = 0.52), and hematocrit after resuscitation (r = −0.73); no correlation with time to transport or injury type.

Despite the patients' higher acuity, deterioration during the 7000-mile transport of these critically injured casualties was lower than during civilian transport. Transport during the acute phase of injury appears safe. Areas for further study: validate factors predicting en route deterioration; identify lung blast injury in patients without overt thoracic trauma; study methods to avoid volume overload in these casualties; and study en route hyperthermia control and optimization of brain perfusion in casualties with head trauma.


Presented at the AACN National Teaching Institute in Chicago, Illinois, May 2011.