Background
Each July, teaching hospitals in the United States experience an influx of new resident and fellow physicians. It has been theorized that this occurrence may be associated with increased patient mortality, complication rates, and health care resource use, a phenomenon known as the “July effect.”
Objective
To assess the existence of a July effect in clinical outcomes of patients with acute respiratory distress syndrome (ARDS) receiving mechanical ventilation in the intensive care unit in US teaching hospitals.
Methods
The National Inpatient Sample database was queried for all adult patients with ARDS who received mechanical ventilation from 2012 to 2014. Using a multivariate difference-in-differences (DID) model, differences in mortality, ventilator-associated pneumonia, iatrogenic pneumothorax, central catheter–associated bloodstream infection, and Clostridium difficile infection were compared between teaching and nonteaching hospitals during April-May and July-August.
Results
There were 70 535 and 43 175 hospitalizations meeting study criteria in teaching and nonteaching hospitals, respectively. Multivariate analyses revealed no differential effect on the rates of all-cause inpatient mortality (DID, 0.66; 95% CI, −0.42 to 1.75), C difficile infection (DID, 0.29; 95% CI, −0.19 to 0.78), central catheter–associated bloodstream infection (DID, 0.14; 95% CI, −0.04 to 0.33), iatrogenic pneumothorax (DID, 0.00; 95% CI, −0.25 to 0.24), ventilator-associated pneumonia (DID, 0.22; 95% CI, −0.05 to 0.49), or any complication (DID, 0.60; 95% CI, −0.01 to 1.20) for July-August versus April-May in teaching hospitals compared with nonteaching hospitals.
Conclusions
This study did not show a differential July effect on mortality outcomes and complication rates in ARDS patients receiving mechanical ventilation in teaching hospitals compared with nonteaching hospitals.
During the month of July, US teaching hospitals experience an influx of new resident and fellow physicians with the beginning of a new academic year. It has been hypothesized that the relative inexperience of this new cohort of physicians may be associated with increased patient mortality, complication rates, and health care resource use, a phenomenon that is recognized globally and is known in the United States as the “July effect.”1
Most studies that have assessed the July effect in US hospitals did not account for differences in predicted in-hospital mortality risk: low-risk patients may be less affected by inexperienced trainees than high-risk patients.2-4 Jena et al5 examined the July effect in patients with acute myocardial infarction stratified by disease severity and found that mortality rates among high-risk patients in teaching-intensive hospitals were lower in May than in July.
Patients admitted to the intensive care unit (ICU) are judged to be at the highest risk of inpatient mortality. Although previous studies did not demonstrate a July effect on mortality and length of hospital stay in the ICU population, their findings were limited by small sample sizes, heterogeneity of the patient populations, and unadjusted variations in disease severity.6 The mortality rate of patients with acute respiratory distress syndrome (ARDS) has remained high despite recent advances in critical care therapeutic interventions.7,8 ARDS is associated with prolonged mechanical ventilation and ICU stay, both of which increase the risks of ventilator-associated pneumonia, Clostridium difficile colitis, central catheter–associated bloodstream infection (CLABSI), and procedural complications. These patients require a high level of critical care expertise and are thus likely to be affected by physicians’ level of experience.
We applied the disease- and severity-specific approach of multivariate modeling with covariate adjustment that has been previously used in other studies5 to evaluate the July effect in patients with ARDS who were receiving mechanical ventilation. We hypothesized that patients with ARDS who received mechanical ventilation in teaching hospitals would experience a July effect, with increased mortality rates and complications compared with patients in nonteaching hospitals.
We queried the National Inpatient Sample (NIS) database to identify a nationally representative sample of patients with ARDS admitted to US hospitals from 2012 to 2014. The NIS database contains a 20% stratified random sample of all US hospital discharges. Because the NIS is a publicly available deidentified data set, this study was exempted from institutional review board approval as defined at 45 CFR 46.102D. Data use was in accordance with the NIS data agreement.
We included in our analysis all patients aged 18 years or older who were admitted to the hospital with ARDS and received mechanical ventilation. We defined these patients as those with International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes 518.82, 518.51, 518.52, 518.53, 518.81, 518.84, 518.7, 518.4, 861.20, 785.52, and 995.92 and ICD-9 procedure codes 96.70, 96.71, and 96.72. This case definition has been used in other studies.7-13 We excluded patients with cardiogenic pulmonary edema (ICD-9 codes 428.0, 428.21, 428.23, 428.33, 428.31, 428.41, and 428.43) and patients with incomplete information on the month of admission, age, race, teaching hospital status, or inpatient mortality. ARDS admissions during April and May were compared with those during July and August. We avoided longer time frames to minimize differences in patient outcomes that may arise from seasonal variation.
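The inclusion and exclusion logic described above amounts to set-membership tests on each discharge record's diagnosis and procedure codes. A minimal sketch in Python, assuming hypothetical field names (`dx_codes`, `proc_codes`, `admission_month`, and so on) rather than actual NIS variable names:

```python
# Cohort-selection sketch; field names are illustrative, not NIS variables.
ARDS_DX = {"518.82", "518.51", "518.52", "518.53", "518.81", "518.84",
           "518.7", "518.4", "861.20", "785.52", "995.92"}
VENT_PROC = {"96.70", "96.71", "96.72"}  # mechanical ventilation procedure codes
EXCLUDE_DX = {"428.0", "428.21", "428.23", "428.33",
              "428.31", "428.41", "428.43"}  # cardiogenic pulmonary edema
STUDY_MONTHS = {4, 5, 7, 8}  # April-May versus July-August

def include(record):
    """Apply the study's inclusion/exclusion criteria to one discharge record (a dict)."""
    # Exclude records with incomplete information on key fields
    required = ("admission_month", "age", "race", "teaching", "died")
    if any(record.get(k) is None for k in required):
        return False
    if record["age"] < 18:
        return False
    dx = set(record["dx_codes"])
    if not dx & ARDS_DX:                       # must carry an ARDS diagnosis code
        return False
    if not set(record["proc_codes"]) & VENT_PROC:  # must be mechanically ventilated
        return False
    if dx & EXCLUDE_DX:                        # cardiogenic pulmonary edema excluded
        return False
    return record["admission_month"] in STUDY_MONTHS
```

Survey weighting, which the actual analysis applies to produce national estimates, is omitted here.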
This analysis involved patient demographic and clinical data including age, sex, race, month and year of hospital admission, length of stay, primary and secondary diagnoses and procedures, disposition (eg, inpatient death), median household income (national quartile for patient’s zip code), and insurance status. It also included the hospital characteristics of teaching hospital status, geographic region of the hospital, and hospital size by number of beds.
The primary outcome was all-cause inpatient mortality. The secondary outcomes were rates of iatrogenic pneumothorax (ICD-9 code 512.1), ventilator-associated pneumonia (ICD-9 code 997.31), CLABSI (ICD-9 code 999.32), and C difficile infection (ICD-9 code 008.45).
We compared the baseline characteristics of admitted patients at teaching and nonteaching hospitals during April-May and July-August. Categorical variables were reported as percentages and continuous variables as means and SDs. The Pearson χ2 test and the t test were used to evaluate categorical and continuous variables, respectively. We used a difference-in-differences (DID) model to compare outcomes in April-May and July-August in teaching versus nonteaching hospitals. This approach accounts for any overall seasonal differences in outcomes that affect both teaching and nonteaching hospitals. Variables with a P value of less than .20 on univariable analyses were included in the final multivariate model.14 Survey weights as already derived in the NIS database were applied to all of our analyses to compute national estimates.
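For a binary candidate covariate, the univariable screen described above reduces to a Pearson χ2 test on a 2 × 2 table, retaining the variable when P < .20. A self-contained sketch, using the identity that a 1-df χ2 P value equals erfc(√(x/2)); the function names are illustrative:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def p_value_1df(stat):
    """P(X > stat) for chi-square with 1 df: erfc(sqrt(stat / 2))."""
    return math.erfc(math.sqrt(stat / 2))

def keep(a, b, c, d, threshold=0.20):
    """Retain the covariate in the final model if its univariable P < .20."""
    return p_value_1df(chi2_2x2(a, b, c, d)) < threshold
```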
We estimated a DID multivariate logistic regression model of the following form: logit[P(Ei = 1)] = β0 + β1Ji + β2Ti + β3(Ji × Ti) + β4Gi + εi. Reported odds ratios refer to the β3 coefficient, the interaction term reflecting the comparison of outcomes in July-August relative to April-May between teaching and nonteaching hospitals. Ei was a binary indicator variable for inpatient mortality in hospitalization i; Ji, a July-August indicator variable; Ti, an indicator variable for teaching hospital status; Ji × Ti, the July indicator interacted with teaching hospital status (ie, the July effect); Gi, a vector of covariates adjusted for in the model; and εi, the error term. We checked for parallel trends in April-May (the months preceding the introduction of new medical trainees) and confirmed through visual inspection that the parallel trends assumption was met. Analyses were conducted using Stata/IC version 15.1 (StataCorp).
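The model above can be sketched on simulated data, with the interaction coefficient β3 recovered by Newton-Raphson (iteratively reweighted least squares). This is an illustrative reimplementation, not the authors' Stata code; a single covariate G stands in for the full covariate vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
J = rng.integers(0, 2, n)   # July-August indicator
T = rng.integers(0, 2, n)   # teaching-hospital indicator
G = rng.normal(size=n)      # one illustrative covariate

# True coefficients: b0, bJ, bT, bJxT (the "July effect"), bG
b_true = np.array([-1.0, 0.1, 0.2, 0.15, 0.3])
X = np.column_stack([np.ones(n), J, T, J * T, G])
p = 1.0 / (1.0 + np.exp(-X @ b_true))
y = rng.binomial(1, p)      # simulated inpatient-mortality outcome

# Fit logistic regression by Newton-Raphson
beta = np.zeros(5)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    W = mu * (1 - mu)                       # working weights
    grad = X.T @ (y - mu)
    H = X.T @ (X * W[:, None])              # observed information
    beta += np.linalg.solve(H, grad)

b3 = beta[3]  # interaction (July x teaching): the DID estimate on the logit scale
```

With n this large, the fitted interaction term lands close to the value used to simulate the data; exponentiating β3 gives the corresponding odds ratio.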
A total of 113 710 unweighted hospitalizations met the inclusion criteria and were included in the final analysis. There were 70 535 hospitalizations in teaching hospitals and 43 175 in nonteaching hospitals (Table 1).
Patient Demographic Variables and Hospital Characteristics
More men than women were admitted to both teaching and nonteaching hospitals during the time periods studied; however, the differences were not statistically significant. The mean age of patients was lower in the teaching hospitals than in the nonteaching hospitals for both April-May and July-August (P < .001). In both categories of hospitals, the patients admitted with ARDS were predominantly White, followed by Black. We found differences in hospital size, hospital region, and patient health insurance between teaching and nonteaching hospitals during the study periods. A larger proportion of patients in nonteaching hospitals than in teaching hospitals were treated in large hospitals (66% vs 60%) during the study periods. In the Northeast and Midwest, more patients were hospitalized in the teaching hospitals than in the nonteaching hospitals, whereas the reverse was noted in the South and West. In both categories of hospitals, most patients had Medicare insurance, followed by private insurance.
Primary and Secondary Outcomes
Unadjusted rates of mortality and complication outcomes during April-May and July-August in teaching versus nonteaching hospitals are shown in Table 2. The unadjusted rate of any complication in teaching hospitals was 7.4% for April-May and 8.2% for July-August compared with 5.5% for April-May and 5.7% for July-August in nonteaching hospitals. Likewise, the all-cause mortality rate in teaching hospitals was 29.3% for April-May and 29.7% for July-August compared with 28.0% for April-May and 27.7% for July-August in nonteaching hospitals.
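As an arithmetic check, the unadjusted DID for each outcome is simply the change in teaching hospitals minus the change in nonteaching hospitals. A short sketch using the rates above:

```python
def did(teach_jul, teach_apr, non_jul, non_apr):
    """Unadjusted difference-in-differences in percentage points."""
    return round((teach_jul - teach_apr) - (non_jul - non_apr), 2)

# All-cause mortality: (29.7 - 29.3) - (27.7 - 28.0)
mortality_did = did(29.7, 29.3, 27.7, 28.0)

# Any complication: (8.2 - 7.4) - (5.7 - 5.5)
complication_did = did(8.2, 7.4, 5.7, 5.5)
```

These unadjusted values (0.7 and 0.6 percentage points) are close to the covariate-adjusted DID estimates reported for the same outcomes in Table 3 (0.66 and 0.60).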
On multivariate analyses, there was no differential effect on the rates of all-cause inpatient mortality (DID, 0.66; 95% CI, −0.42 to 1.75), C difficile infection (DID, 0.29; 95% CI, −0.19 to 0.78), CLABSI (DID, 0.14; 95% CI, −0.04 to 0.33), iatrogenic pneumothorax (DID, 0.00; 95% CI, −0.25 to 0.24), ventilator-associated pneumonia (DID, 0.22; 95% CI, −0.05 to 0.49), or any complication (DID, 0.60; 95% CI, −0.01 to 1.20) for July-August versus April-May in teaching hospitals compared with nonteaching hospitals (Table 3).
Our study did not show a differential effect of July on mortality outcome and rates of complications in patients with ARDS receiving mechanical ventilation who were admitted to teaching hospitals compared with nonteaching hospitals. These findings suggest that the addition of new residents and fellows to the training workforce in July may not be associated with worse outcomes in patients with ARDS. Patient care in the ICU is largely influenced by standard protocols and operating practices, which may buffer the possible effect on outcomes of knowledge gaps of new medical residents and fellows in the ICU.15 Another possible explanation for our findings may be the higher level of supervision by more experienced medical staff in ICUs.
The ICU rotation often causes the highest stress levels for relatively new and inexperienced resident and fellow physicians owing to the high acuity of patient illnesses on this unit.16 From the very first day, residents and fellows are aware of the complexity of disease processes in the ICU and the high risk of complications and mortality. Thus, they may be more likely to be proactive in seeking help and advice from more experienced colleagues with the complex decision-making and medical management required by high-risk conditions such as ARDS.
Our results are similar to those of other studies on the hypothesized July effect. Finkielman et al4 concluded that ICU admissions in July were not associated with an increased hospital mortality rate or a longer stay in the ICU. Another study that examined a patient population in more than 30 ICUs across 28 hospitals also did not find a July effect in all-cause mortality and health resource use.6 These findings may indicate that physician training early in the academic year does not have a significant effect on patient outcomes in the ICU.
A relative strength of our study is its focus on a specific high-risk population rather than evaluation of ICU patients as a homogeneous group, which does not account for varying case mix and severity. Our results are not unique among patients with critical illness. Saqib et al17 noted similar findings in patients with septic shock treated in the ICU. Likewise, Highstead et al15 demonstrated similar outcomes in patients admitted to the surgical ICU with traumatic injuries at the beginning of the academic year compared with the end of the academic year in a large, urban level I trauma center. Interestingly, Jena et al5 noted a July effect in mortality among high-risk patients with acute myocardial infarction in teaching hospitals but no such effect among low-risk patients and those treated in nonteaching hospitals.
Our study represents the first evaluation of the July effect exclusively in ARDS patients requiring mechanical ventilation. Given advances in optimal ventilator strategies and adherence to protocols, it is not surprising that we observed low rates of ventilator-associated complications, C difficile infection, and CLABSI. However, our results may also indicate low sensitivity of the coding strategies used to identify these complications. The absence of a July effect and the low rates of iatrogenic pneumothorax, which we initially presumed would be high given that new residents and fellows might not be as technically proficient in performing procedures as more experienced physicians, may again reflect the high level of supervision and oversight in the ICU.
Our study has some limitations. We relied on diagnostic codes to extract the data, which may not accurately capture the actual diagnosis or severity of illness owing to possible variations in coding practices in both teaching and nonteaching hospitals. Although institutional coding practices are most likely similar across months, the DID study design does not fully address the differences in coding between teaching and nonteaching hospitals. We used ICD-9 codes to identify complications; therefore, we could not determine the specific timing of the complications during the patients’ hospitalization course.
In addition, we could not specifically characterize ARDS severity (mild, moderate, severe), which might have affected mortality outcomes in our study population owing to a lack of relevant markers in the database. However, teaching hospitals are likely to have more severe cases compared with nonteaching hospitals, and the DID method addresses the potential case mix differences.
We could not account for variability in the level of supervision and clinical experience among attending physicians in different hospitals, and we were not able to stratify individual teaching hospitals according to the presence of ICU-specific teaching, levels of trainee involvement in the treatment of patients with ARDS, different ARDS management protocols, and hospital staffing. We used a P value–based process to select variables for the multivariate model, which has been used in previous NIS-based studies.14 However, some evidence suggests that this method may not truly adjust for confounders. We also recognize the potential impact of seasonal variability on the incidence and severity of ARDS, as viral causes of ARDS are more common in cooler months, which could differ between the study periods. Last, because our study focused on patients with ARDS who received mechanical ventilation, the results may not be generalizable to other high-risk conditions seen in the ICU.
This study did not show any differences in all-cause mortality or complications among patients with ARDS receiving mechanical ventilation between teaching and nonteaching hospitals in the United States that were associated with the arrival of new medical trainees (resident and fellow physicians) in July. Further studies are needed to evaluate a potential July effect in other high-risk conditions treated in the ICU.
We thank the staffs of Grady Memorial Hospital, Emory School of Medicine, Atlanta, Georgia, and Grand Strand Medical Center, Myrtle Beach, South Carolina, for providing insights on the medical education of trainees in the ICU. The abstract of this article was presented as a poster at the 49th Critical Care Congress of the Society of Critical Care Medicine; February 16-19, 2020; Orlando, Florida, and was published in a supplement of Critical Care Medicine.