Your Bouncebacks Are Not Alone

“Remember that patient you had yesterday?” is rarely a favorable start to a conversation.  Emergency Department bouncebacks are a frequently tracked metric, ostensibly for self-reflection, but also as a proxy for care quality and mismanagement.

This is a six-state review of 2 to 5 years of data linked between State Emergency Department Databases and State Inpatient Databases, evaluating Emergency Department recidivism up to 30 days.  The authors also linked these data to healthcare cost data, but the highest-quality cut of meat here is the detail on bouncebacks.  Based on 53,530,443 Emergency Department visits, these authors found the overall 3-day revisit rate was 8.2%, and the 30-day revisit rate was 19.9%.  Approximately two-thirds of revisits were to the same Emergency Department, with the remainder choosing a different ED.

These numbers, I think, are much higher than most would expect – and provide at least a small amount of solace if it seems there’s always a previous patient of yours checking back into the ED.  The authors break down several interesting details regarding the types of revisits:

  • Skin and soft-tissue infections had a 23.1% 3-day revisit rate, with 12.9% admitted on revisit.
  • Abdominal pain was the second-most frequent reason for revisit, at 9.7%, with 29.9% admitted on revisit.
  • Patients aged 18-44 were more likely to visit a different ED for their second visit, while patients aged 65 and above were the most likely to be admitted on revisit.
  • Patients with back pain were the most likely to revisit a different ED within 3 days: a 7.8% revisit rate, with 41% of those revisits occurring at a different facility.

Simply at face value, these additional visits are expensive and resource-intensive – particularly if there’s no effective local electronic information exchange preventing duplication of testing.  There is also clearly ample opportunity to develop targeted interventions for certain groups of patients, potentially providing follow-up care in a lower-cost setting.

“Revisit Rates and Associated Costs After an Emergency Department Encounter”
http://www.ncbi.nlm.nih.gov/pubmed/26030633

Why Do We Still Admit Chest Pain?

If you worked a shift today, you had a patient with chest pain.  As these authors cite in their introduction, visits for chest pain comprise 1 in 20 presentations to Emergency Departments – and the evaluation of such patients costs more than the annual GDP of Malta.  As our hospitalist colleagues lament, a massive subset of inpatient evaluations for chest pain is invariably negative – or, even worse, generates false positives and other iatrogenic harms.

This study is a retrospective evaluation of an observational registry of chest pain presentations to three Ohio Emergency Departments.  The authors searched five years’ worth of data and generated a cohort consisting of patients who received at least two consecutive negative troponins initiated in the Emergency Department.  The primary outcome was in-hospital life-threatening arrhythmia, STEMI, cardiac arrest, or death.

In this database of 45,416 patients, 11,230 met the inclusion criteria.  Independent, hypothesis-blinded abstractors reviewed a subset of “possible” primary outcomes flagged by electronic data, manually abstracting those identified.  From this manual review, there were 20 (0.18%) patients for whom a critical outcome was identified.  The authors reviewed each case and tried to identify specific risks for adverse outcome – and, if patients with abnormal vital signs, left bundle branch block, pacemaker rhythm, or signs of EKG ischemia were further excluded, the incidence of critical outcomes dropped to 4 out of 7,266 (0.06%).

The supposed takeaway from this article is that patients who have been ruled out by serial troponin testing have uneventful hospital courses.  Extending this to practice, the theory is we could perhaps generalize this evidence to our 1- or 2-hour rapid-biomarker rule-outs.  These patients would then have such an acceptable safety profile that they could be discharged from the ED with outpatient follow-up to assess the need for, or appropriateness of, further provocative or anatomic testing.

These data are not quite strong enough to claim such a strategy is bulletproof.  The risk, I agree, is certainly small – with thousands requiring hospitalization in order to obtain benefit for one patient.  The benefit for the patients in this study, however, is not the soft MACE outcome described in other studies – these are hard endpoints, occurring in folks who would likely be dead if not observed in the hospital.
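To put that “thousands” figure in rough perspective, here is a back-of-envelope calculation of my own from the event rates above (not a number reported by the authors), taking the inverse of the critical outcome rate as the approximate number of hospitalizations per captured event:

\[
\frac{11{,}230\ \text{admitted}}{20\ \text{critical outcomes}} \approx 562
\qquad
\frac{7{,}266\ \text{lower-risk admitted}}{4\ \text{critical outcomes}} \approx 1{,}817
\]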

While I expect outpatient evaluation of substantial numbers of chest pain patients to become the new culture of Emergency Medicine – and as much as I would like to purchase Malta for ACEP next year – this isn’t zero-miss.  These data support development of appropriate outpatient strategies – but not wholesale practice revision based solely on this data set.

Addendum: Louise Cullen makes a few excellent points via social media peer review, which I’ll paraphrase here: 1) The endpoints measured here are not the only important patient-oriented outcomes; a small number of initially troponin-negative acute coronary syndromes may be missed. 2) There are patients for whom hospitalization and urgent evaluation have value due to medical interventions initiated in the hospital.  An aggressive discharge strategy cannot be built on a catch-and-release foundation without tightly integrated follow-up.

“Risk for Clinically Relevant Adverse Cardiac Events in Patients With Chest Pain at Hospital Admission”
http://archinte.jamanetwork.com/article.aspx?articleid=2294235