Sepsis Alerts Save Lives!

Not doctors, of course – the alerts.

This is one of those “we had to do it, so we studied it” sorts of evaluations because, as most of us have experienced, the decision to implement the sepsis alerts is not always driven by pent-up clinician demand.

The authors describe this as a sort of “natural experiment”, in which a phased or stepped roll-out allows some presumption of control for the unmeasured cultural and process confounders that limit pre-/post- studies. In this case, the decision was made to implement the “St John Sepsis Algorithm” developed by Cerner. This algorithm is composed of two alerts – one somewhat SIRS- or inflammation-based for “suspicion of sepsis”, and one incorporating organ dysfunction for “suspicion of severe sepsis”. The “phased” part of the roll-out involved turning on the alerts first in the acute inpatient wards, then the Emergency Department, and then the specialty wards. Prior to being activated, however, the alert algorithm ran “silently” to create a comparison group of patients for whom an alert would have been triggered.
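For readers unfamiliar with how these two-tier alerts are typically structured, a rough sketch is below. To be clear, Cerner's actual St John Sepsis Algorithm is proprietary; the thresholds here are just the generic SIRS criteria and common organ-dysfunction cutoffs, used purely for illustration.

```python
# Illustrative two-tier sepsis alert sketch. Thresholds are generic SIRS
# criteria and common organ-dysfunction cutoffs, NOT Cerner's proprietary
# St John Sepsis Algorithm.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Obs:
    temp_c: float       # temperature, Celsius
    heart_rate: int     # beats/min
    resp_rate: int      # breaths/min
    wbc: float          # white cell count, x10^9/L
    sbp: int            # systolic blood pressure, mmHg
    lactate: float      # mmol/L
    creatinine: float   # mg/dL


def sirs_count(o: Obs) -> int:
    """Count how many classic SIRS criteria are met."""
    return sum([
        o.temp_c > 38.0 or o.temp_c < 36.0,
        o.heart_rate > 90,
        o.resp_rate > 20,
        o.wbc > 12.0 or o.wbc < 4.0,
    ])


def organ_dysfunction(o: Obs) -> bool:
    """Flag common organ-dysfunction markers (illustrative cutoffs)."""
    return o.sbp < 90 or o.lactate > 2.0 or o.creatinine > 2.0


def sepsis_alert(o: Obs) -> Optional[str]:
    """Return which alert tier, if any, would fire for these observations."""
    if sirs_count(o) >= 2:
        if organ_dysfunction(o):
            return "suspicion of severe sepsis"
        return "suspicion of sepsis"
    return None
```

Running this "silently" for the comparison phase would simply mean logging the return value of `sepsis_alert` without surfacing anything to clinicians, which is how the pre-activation cohort was defined.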

The short summary:

  • In their inpatient wards, mortality among patients meeting alert criteria decreased from 6.4% to 5.1%.
  • In their Emergency Department, admitted patients meeting alert criteria were less likely to have a ≥7 day inpatient length-of-stay.
  • In their Emergency Department, the proportion of patients meeting alert criteria who received antibiotics within 1 hour of the alert firing increased from 36.9% to 44.7%.

There are major problems here, of course, both intrinsic to their study design and otherwise. While it is a “multisite” study, there are only two hospitals involved. The “phased” implementation was not the typical different-hospitals-at-different-times design, but occurred within each hospital. They report inpatient mortality changes without actually reporting any changes in clinician behavior between the pre- and post- phases – i.e., what did clinicians actually do in response to the alerts? Then, they look at timely antibiotic administration, but not at overall antibiotic volume or the various unintended consequences potentially associated with this alert. Did admission rates increase? Did the percentage of discharged patients receiving intravenous antibiotics increase? Did Clostridium difficile infection rates increase?

Absent the funding and infrastructure to study these sorts of interventions more rigorously and prospectively, these “natural experiments” can be useful evidence. However, these authors do not appear to have taken an expansive enough view of their data to fully support an unquestioned conclusion of benefit from the alert intervention.

“Evaluating a digital sepsis alert in a London multisite hospital network: a natural experiment using electronic health record data”

https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocz186/5607431