Selective vs. Universal Screening for BCVI

Chasing down blunt cerebrovascular injury (BCVI) is a controversial topic. The incidence of injury to the carotid or vertebral arteries following blunt trauma is extremely low, though its relative rarity varies by practice setting. Because of this general infrequency, many settings utilize the “Memphis” or “Denver” screening criteria to improve the value of imaging.

These authors, however, describe their implementation of a universal screening protocol for BCVI as a routine component of their “whole-body” CT for “all major adult blunt trauma activations”. The data set analyzed is a retrospective local trauma registry from their Level 1 trauma center, in which 4,687 activations fulfilled their inclusion criteria. The overall incidence of BCVI in their population was 2.7%, with about half of those being grade 3 or higher (pseudoaneurysm or worse).

Based on case review of these 126 patients with BCVIs, only 91 (72%) would have met the current American College of Surgeons guidelines for imaging, with a handful more picked up by the expanded Denver criteria. The authors’ conclusion – universal screening should be considered – ties in a bit with their bias towards whole-body CT, presuming these additional detected injuries represent potential reduced downstream morbidity and mortality.

It should be clear, however, these data have somewhat limited generalizability to most of Emergency Medicine. The individuals with BCVI in their cohort suffered substantial numbers of skull base fractures, cervical spine fractures, and traumatic brain injuries, and had an in-hospital mortality of 12.7%. Outside the context of major trauma, universal screening for BCVI will be of limited value. For the vast majority of us, continuing to refer to the most recent EAST recommendations for selective screening remains a reasonable practice. In the narrower context of major trauma referrals, these data could inform more expansive screening protocols, while universal screening for all major trauma is still likely one step too far.

“Blunt Cerebrovascular Injury – The Case for Universal Screening”
https://journals.lww.com/jtrauma/Abstract/9000/BLUNT_CEREBROVASCULAR_INJURY___THE_CASE_FOR.97839.aspx

Telehealth Triage for EMS

We’ve all seen non-emergency patients in our Emergency Departments. Further still, we’ve seen those same non-emergency patients arrive via emergency medical transport. Per these authors, the estimated burden of non-emergency or medically unnecessary ambulance transport ranges from 33% to 50%.

Houston, Texas, has a program of prehospital telehealth support provided by online board-certified Emergency Physicians. This article describes their retrospective cohort from 2015 through 2017, in which 15,067 patient encounters occurred from a total of 865,000 EMS incidents. Patients were eligible for telehealth if they met certain vital sign, chief complaint, and age criteria, and could be transported via a non-ambulance alternative.

The good news: nearly everyone the EPs consulted with over telehealth was diverted from ambulance utilization. Only 11.2% of patients were ultimately transported by EMS – while basically the entire remainder utilized a taxi service. The bad news: nearly everyone was still transported to an Emergency Department. Only 5.0% of patients accepted same-day or next-day referral for follow-up at an affiliated outpatient health center. The primary advantage of this service, then, is increasing the availability of the limited ambulance resource to respond to higher-acuity patients.

There are more than a few issues at play here in our current system with regard to providing, effectively, an unpaid, low-fidelity evaluation and assuming the liability risk. However, in systems with different structures and payment models, where the overall costs to the system from ambulance utilization outweigh the other costs, this has a great deal of potential. Whether these results are generalizable is a reasonable concern, although the proportion of patients referred to non-ambulance transport is not surprising. The entry criteria for telehealth consultation substantially narrowed the eligible population specifically to those whose complaints likely did not merit emergency transport. Finally, whether the EPs needed to be involved is another matter, one that could potentially be solved by more robust protocols to defer transport.

“Telehealth Impact on Primary Care Related Ambulance Transports”

https://www.ncbi.nlm.nih.gov/pubmed/30626250

Again With The Value of CT-Diagnosed Rib Fractures

The elderly are more likely to fall. The elderly who fall are more likely to suffer rib fractures. The elderly who fall and suffer rib fractures are more likely to contract pneumonia and die. The chest x-ray is insensitive for rib fractures. So, should we always perform a CT in elderly patients who fall and in whom we suspect rib fractures?

This is a single-center retrospective study of 330 elderly patients, mean age 84, who presented after a fall. Each patient included in the study received a chest XR, followed by a CT of their chest. Overall, 96 patients had a rib fracture – 40 of which were seen on XR, the remainder only on CT. And, there are a number of interesting tidbits they describe in their population:

  • Neither hospital length-of-stay, ICU length-of-stay, nor hospital mortality (10.3% vs. 7.3%) was (statistically) increased in those with occult rib fractures compared to those without any rib fractures.
  • These findings held true for the 63 patients with ≥2 occult rib fractures (both XR+ and XR-).
  • In patients with rib fractures seen on XR, the median number of additional rib fractures seen on CT was 2 (range 0-11).

Rates of in-hospital complications were similar between those hospitalized with rib fractures visualized on XR and those visualized only on CT. Then, in their case review, most adverse events occurring in those with occult rib fractures occurred due to associated injuries, events, or iatrogenic causes – not primarily due to the thoracic trauma itself.

This is only a small case series, and it is biased towards higher acuity – considering CT imaging was obtained at clinician discretion in all cases, and admission rates were nearly 90%. However, it does generally further demonstrate the low value of obtaining CT imaging to ensure no occult rib fractures are missed. An XR has low sensitivity, but these data do not support a premise of increased harm due to missed occult fractures.

“Chest CT imaging utility for radiographically occult rib fractures in elderly fall-injured patients”

https://journals.lww.com/jtrauma/Abstract/publishahead/Chest_CT_imaging_utility_for_radiographically.98414.aspx

Just Stand There! Bacterial Vaginosis Edition

There has long been considered to be a causative association between bacterial vaginosis and preterm delivery – with increasing risk of delivery when BV is identified earlier in pregnancy. Clearly, of course, early antibiotic treatment would eradicate the pathology and improve pregnancy outcomes. It just makes sense.

But, no.

In this large, multicenter trial performed in France, 84,530 pregnant women were screened before 14 weeks gestation, resulting in 5,630 diagnoses of BV. Patients deemed “low-risk” for preterm delivery were randomized to one of two clindamycin regimens or placebo, while those few deemed “high-risk” were excluded from placebo randomization. The primary outcome was late miscarriage or early preterm birth, a range of preterm delivery spanning 16 to 32 weeks gestation.

Approximately two-fifths of those approached for enrollment declined to participate, leaving 2,869 for randomization into one of the three low-risk arms. There were no important baseline differences between the three cohorts. The results: no difference. About 1% of each group met the primary outcome, and there were no signals of even a small magnitude of benefit to treatment with clindamycin in the low-risk cohorts. Adverse events, of course, clearly favored placebo – befitting clindamycin’s known propensity for gastrointestinal effects – but no effects on fetal outcomes were apparent.

This is not specifically relevant to Emergency Medicine other than to demonstrate the need to rigorously test even what seems obvious. Widespread screening and aggressive, proactive treatment – even when all signs point to an expected positive result – represented low-value, and potentially harmful, care.

“Early clindamycin for bacterial vaginosis in pregnancy (PREMEVA): a multicentre, double-blind, randomised controlled trial”

https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(18)31617-9/fulltext

Magical Biomarkers in TBI

Decision instruments be damned. Clinical judgment be damned. We need a test! We need a biomarker test to tell us whether we should perform a CT in traumatic brain injury!

Thus enter ubiquitin C-terminal hydrolase-L1 (UCH-L1) and glial fibrillary acidic protein (GFAP), mated together in loving embrace by Banyan Biomarkers in a prospective, observational trial – ALERT-TBI. The aim of this study was to validate these biomarkers, each with their pre-set cut-off thresholds, as accurate predictors of intracranial injuries on CT. Specifically, as accurate predictors in a convenience sample of patients presenting to one of 22 investigational sites with a GCS between 9 and 15.

These trialists collected samples on 1,977 patients, 125 of whom were “CT-positive” – meaning intracranial blood, as typical, but also “bland sheer injury … brain oedema, brain herniation, non-haemorrhagic contusion, ventricular compression, ventricular trapping, cranial fractures, depressed skull fractures, facial fractures, scalp injury, or skull base fractures.” Only 8 of these patients ultimately underwent neurosurgical intervention.

The good news: these assays were 100% sensitive for neurosurgical lesions. The bad news: the lower bound of the 95% confidence interval is 63%. The other bad news: the specificity of the test is only ~35%, meaning it recommends CTs in two-thirds of your TBI patients. And, also: the median time from injury to blood draw was 3.2 hours, meaning we can’t actually generalize these findings to potential phlebotomy in the acute peri-injury trauma evaluation. And, we could keep going on with the bad news, to be certain, but I think we’ll stop there.
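That cavernous confidence interval is simply a consequence of the tiny denominator – only 8 neurosurgical cases. A quick sketch, assuming the exact (Clopper-Pearson) binomial interval was used:

```python
# A "perfect" sensitivity of 8/8 still leaves a wide confidence interval.
# When every observation is positive, the two-sided 95% exact
# (Clopper-Pearson) lower bound reduces to the closed form (alpha/2)**(1/n).

def exact_lower_bound(successes: int, n: int, alpha: float = 0.05) -> float:
    """Clopper-Pearson lower bound for the special case successes == n."""
    if successes != n:
        raise ValueError("closed form only valid when all observations are positive")
    return (alpha / 2) ** (1.0 / n)

print(f"lower bound of 95% CI for 8/8: {exact_lower_bound(8, 8):.0%}")  # ~63%
```

With only 8 events, even a flawless observed sensitivity cannot exclude missing more than a third of neurosurgical lesions.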

The final point to make is to note this study concluded in 2014. It is now, of course, past the midpoint of 2018. It probably goes without saying study findings with obvious advantages to their funding sponsor are not neglected for several years, nor shuffled into Lancet Neurology absent any fanfare.

Chalk this study up as yet another failed dalliance into potential biomarker use for TBI.

“Serum GFAP and UCH-L1 for prediction of absence of intracranial injuries on head CT (ALERT-TBI): a multicentre observational study”

https://www.thelancet.com/journals/laneur/article/PIIS1474-4422(18)30231-X/fulltext

Home is Where the Blood Pressure Monitor Is

This article regarding the prevalence of Emergency Department visits opens in quite the disarming fashion, noting, casually, the anecdotal impression of increased visits for elevated blood pressure detected by a home machine. That’s a nice way of saying “Every. Damn. Day.”

So, how often are these “worried well” ending up in our Emergency Department? According to these authors – we don’t know. We don’t know because yes, a little over 40% of those visiting the ED with a primary diagnosis of “hypertension” were the result of home blood pressure readings, but 37% of their cohort was “not documented”. It is difficult to interpret the source of the “not documented” – were they in the ED for another symptom? Or was there some other referral source? It’s unfortunately impossible to say. Regardless, this 40% self-referral due to home blood pressure readings dwarfs that of those who detected an elevated blood pressure at a pharmacy (8%) or MD office (13%). So, even if precision is lacking in these data, the proportion is substantial – and probably fits with our anecdotal sense.

Median blood pressure from the referring source, when available, generally exceeded the ED measurement – which was a median of 182/97 in triage. Interestingly, 41% of patients received some sort of medication for blood pressure control while in the ED. Another 7% of patients required admission – which is where this article sort of starts to get muddy. The overall intent seems to be to describe this influx of aforementioned “worried well” due to home blood pressure monitors, but a 7% admission rate is hardly trivial – and, in fact, 78% of patients complained of some potentially related or important concurrent symptom. The most common somatic complaints were headache (38%), dizziness (30%), and chest pain (16%). This isn’t exactly a cohort of “asymptomatic hypertension”, and shouldn’t be perceived as a proxy for potentially unnecessary ED utilization.

Of course, there is the chicken and egg paradox with these symptoms – are they somatization of anxiety from the elevated blood pressure or true pathology? Considering the relative paucity of admissions from this fairly symptomatic cohort, it does not appear treating clinicians generally considered the elevated blood pressure related to important end-organ dysfunction. Then, there are the obvious limitations to their chart review and the generalization challenges from this regional catchment area in Canada. Many words later, at the least, there is one reasonable takeaway regarding ED patients with home blood pressure monitors – it is true, they’re everywhere!

“The Characteristics and Outcomes of Patients Who Make an Emergency Department Visit for Hypertension After Use of a Home or Pharmacy Blood Pressure Device”
https://www.ncbi.nlm.nih.gov/pubmed/30037583

The qSOFA Story So Far

What do you do when another authorship group performs the exact same meta-analysis and systematic review you’ve been working on – and publishes first? Well, there really isn’t much choice – applaud their great work and learn from the experience.

This is primarily an evaluation of the quick Sequential Organ Failure Assessment (qSOFA), with a little of the old Systemic Inflammatory Response Syndrome (SIRS) thrown in for contextual comparison. The included studies spanned Intensive Care Units, hospital wards, and Emergency Departments. Their primary outcome was mortality, reported in these studies mostly as in-hospital mortality, but also as 28-day and 30-day mortality.

The quick synopsis of their results, pooling 38 studies and 383,333 patients, mostly from retrospective studies, and mostly from ICU cohorts:

  • qSOFA is not terribly sensitive, particularly in the settings in which it is most relevant. Their reported overall sensitivity of 60.8% is inflated by its performance in ICU patients, and in ED patients sensitivity is only 46.7%.
  • Specificity is OK, at 72.0% overall and 81.3% in the ED. However, the incidence of mortality from sepsis is usually low enough in a general ED population that the positive predictive value will be fairly weak.
  • In their comparative cohort for SIRS, which is frankly probably irrelevant because SIRS is already well-described, the expected results of higher sensitivity and lower specificity were observed.
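The weak positive predictive value follows directly from Bayes’ theorem. A minimal sketch, using the pooled ED sensitivity and specificity quoted above and a purely hypothetical 5% mortality prevalence:

```python
# Why a "just OK" specificity translates to a weak positive predictive value
# when the outcome is rare. Sensitivity and specificity are the pooled ED
# figures quoted above; the 5% mortality prevalence is a hypothetical
# illustration, not a number from the meta-analysis.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

print(f"qSOFA PPV at 5% prevalence: {ppv(0.467, 0.813, 0.05):.1%}")
```

Even at that generous prevalence, barely one in nine qSOFA-positive patients would die – hardly a useful flag for triage.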

Their general conclusion, with which I generally agree, is qSOFA is not an appropriate general screening tool. They did not add much from a further editorial standpoint – so, rather than let our own draft manuscript for this same meta-analysis and systematic review languish unseen, here is an abridged version of the Discussion section of our manuscript, written by myself, Rory Spiegel, and Jeremy Faust:

This analysis demonstrates qualitatively similar findings as those observed in the original derivation study performed by Seymour et al. We find our pooled AUC, however, to be lower than the 0.81 reported in their derivation and validation cohort, as well as the 0.78 reported in two external validation cohorts. The meaning of this difference is difficult to interpret, as the clinical utility of this instrument is derived from its use as a binary cut-off, rather than an ordinal AUC. Our sensitivity and specificity from our primary analysis, respectively, compare favorably to their reported 55% and 84%. We also found qSOFA’s predictive capabilities remained robust when exposed to our sensitivity analyses. When only studies at low risk for bias were included, qSOFA’s performance improved.

While our evaluation of SIRS is limited by restricting the comparison solely to those studies which contemporaneously reported qSOFA, our results are broadly consistent with results previously reported. The SIRS criteria at the commonly used cut-off benefits from superior sensitivity for mortality in those with suspected infection, while its specificity is clearly lacking due to its impaired capability to distinguish between clinically important immune system dysregulation and normal host responses to physiologic stress. The important discussion, therefore, is whether and how to incorporate each of these tools – and others, such as the Modified Early Warning Score or National Early Warning Score – into clinical practice, guidelines, and quality measures.

The current approach to sepsis revolves around the perceived significant morbidity and mortality associated with under-recognized sepsis, favoring screening tools whose purpose is minimizing missed diagnoses. Current sepsis algorithms typically rely upon SIRS, depending on its maximal catchment at the expense of over-triage. Such maximal catchment almost certainly represents a low-value approach to sepsis, considering the in-hospital mortality of patients in our cohort with ≥2 SIRS criteria is not meaningfully different than the overall mortality of the entire cohort. The subsequent fundamental question, however, is whether qSOFA and its role in the new sepsis definitions provides a structure for improvement.

Using qSOFA as designed with its cut-off of ≥2, it should be clear its sensitivity does not support its use as an early screening tool, despite its simplicity and exclusion of laboratory measures. However, in a cohort with suspected infection and some physiologic manifestations of sepsis, e.g., SIRS, the true value of qSOFA may be in prioritizing a subgroup for early clinical evaluation. In a healthcare system with unlimited resources, it may be feasible to give each patient uncompromising evaluation and care. Absent that, we must hew towards a prioritized approach, where our resources are directed towards those highest-yield patients for whom time-sensitive interventions modify downstream outcomes.

Less discussed are the direct, patient-oriented harms resulting from falsely-positive screening tools and over-enrollment into sepsis bundles. Recent data suggests benefits from shorter time-to-antibiotics administration intervals are realized primarily in critically ill patients. As such, utilization of overly sensitive tools, such as the SIRS criteria, would lead to over-triage and over-treatment, leading to potential iatrogenic harms in excess of net benefits. These harms include effects on individual and community patterns of antibiotic resistance, as exposure to broad-spectrum antibiotics leads to induction of extended-spectrum beta-lactamase resistance in gram-negative pathogens or vancomycin- and carbapenem-resistance in enterococci. Unnecessary antibiotic exposures lead to excess cases of C. difficile infections. The aggressive fluid resuscitation mandated by sepsis bundles leads to metabolic derangement and potential respiratory impairment. Further research should assess the extent of these harms, and in what measure they counterbalance those benefiting from time-sensitive interventions.

This meta-analysis has several limitations. First, we were limited by the relative dearth of high-quality prospective data; most of the studies included in our analysis were retrospective. Second, we restricted our prognostic analyses to mortality alone, rather than diagnosis of sepsis. We chose to analyze only mortality because of competing sepsis definitions among expert bodies and government-issued guidelines. Among them, however, mortality is a common feature, the most objective metric, and manifestly the most important patient-centered outcome. Our analysis would not capture other important sequelae of sepsis, including amputation, loss of neurologic and/or independent function, chronic pain, and prolonged psychiatric effects of substantial critical illness. Third, we do not know whether patients included in these studies were septic on presentation, or developed sepsis later in their hospitalization. This may degrade the accuracy assessment of both SIRS and qSOFA. Fourth, while we know that qSOFA alone may miss some cases of sepsis that SIRS might detect, we do not know how many would, in reality, have been deprived of antibiotics and other necessary treatments. In other words, the fate of “qSOFA negative” patients who were evaluated and treated by physicians qualified to detect and treat critical illness via clinical acumen is not known; nor should it be presumed that all such patients would have necessarily been deprived of timely treatment. Our analysis and comparison of SIRS is admittedly incomplete, and not the most reliable estimate of its diagnostic characteristics, but is provided for incidental comparison.

The prudent clinical role for qSOFA, however, is as yet undefined, and these data do not offer insight regarding its superiority to clinician judgment for determining a cohort at greatest risk for poor outcomes. Compared with SIRS, at least, those patients identified by qSOFA likely better represent the subset of patients for whom aggressive early treatment confers a particular advantage, and may drive high-value care in the sepsis arena. Future research should assist clinicians in further individualizing initial treatment of sepsis for those stratified to differing levels of risk for poor outcome, as well as to account for the iatrogenic harms and system costs.

“Prognostic Accuracy of the Quick Sequential Organ Failure Assessment
for Mortality in Patients With Suspected Infection: A Systematic Review and Meta-analysis”
http://annals.org/aim/fullarticle/2671919/prognostic-accuracy-quick-sequential-organ-failure-assessment-mortality-patients-suspected

Questioning the Benefit of Non-Invasive Testing for Chest Pain

Welcome to the fascinating world of instrumental variable analysis!

This is a retrospective cohort analysis of a large insurance claims database attempting to glean insight into the value of non-invasive testing for patients presenting to the Emergency Department with chest pain. Previous versions of the American Heart Association guidelines for the evaluation of so-called “low risk” chest pain encouraged patients to undergo some sort of objective testing within 72 hours of initial evaluation. These recommendations have waned in more recent iterations of the guideline, but many settings still routinely recommend admission and observation following an episode of chest pain.

These authors used a cohort of 926,633 unique admissions for chest pain and analyzed them to evaluate any downstream effects on subsequent morbidity and resource utilization. As part of this analysis, they also split the cohort into two groups for comparison based on the day of the week of presentation – hence the “instrumental variable” for the instrumental variable analysis performed alongside their multivariate analysis. The authors assumed individual patient characteristics would be unrelated to the day of presentation, but that downstream test frequency would differ. The authors then use this difference in test frequency to thread the eye of the needle as a pseudo-randomization component to aid in comparison.
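The intuition can be sketched with the simplest version of this technique – the Wald estimator for a binary instrument. All numbers below are synthetic, chosen only to mimic the weekday/weekend testing gradient described; nothing here comes from the study’s data:

```python
# The simplest form of instrumental variable analysis is the Wald estimator
# for a binary instrument: the instrument-driven difference in outcomes
# divided by the instrument-driven difference in treatment rates.
# All numbers are synthetic illustration, not figures from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
weekday = rng.integers(0, 2, n)                  # instrument: day of presentation
# testing is more frequent on weekdays, per the study's premise
tested = rng.random(n) < np.where(weekday == 1, 0.18, 0.12)
# outcome depends on testing (a built-in effect of 0.02) plus noise
outcome = 0.02 * tested + rng.normal(0, 0.05, n)

def wald_estimate(y, treat, z):
    """Local average treatment effect via the Wald (ratio) estimator."""
    dy = y[z == 1].mean() - y[z == 0].mean()
    dt = treat[z == 1].mean() - treat[z == 0].mean()
    return dy / dt

print(f"recovered effect of testing: {wald_estimate(outcome, tested, weekday):.3f}")
```

Because the day of presentation shifts only the probability of testing – not the patients themselves – dividing the two differences recovers the effect of testing without ever comparing tested to untested patients directly.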

There were 571,988 patients presenting on a weekday, 18.1% and 26.1% of whom underwent some non-invasive testing within 2 and 30 days of an ED visit, respectively. Then, there were 354,645 patients presenting on a weekend, with testing rates of 12.3% and 21.3%. There were obvious baseline differences between those undergoing testing and those who did not, and these were controlled for using multivariate techniques as well as the aforementioned instrumental variable analysis.

Looking at clinical outcomes – coronary revascularization and acute MI at one year – there were mixed results: definitely more revascularization procedures associated with exposure to non-invasive testing, but no statistically significant increase in downstream diagnoses of AMI. The trend, if any, is actually towards increased diagnoses of AMI. The absolute numbers are quite small, on the order of a handful of extra AMIs per 1,000 patients per year, and may reflect either the complications resulting from stenting or a propensity to receive different clinical diagnoses for similar presentations after receiving a coronary stent. Or, owing to the nature of the analysis, the trend may simply be noise.

The level of evidence here is not high, considering its retrospective nature and dependence on statistical adjustments. It also cannot determine whether there are longer-term consequences or benefits beyond its one-year follow-up time-frame. Its primary value is in the context of the larger body of evidence. At the least, it suggests we have equipoise to examine which, if any, patients ought to be referred for routine follow-up – or whether the role of the ED should be limited to ruling out an acute coronary syndrome, and the downstream medical ecosystem is the most appropriate venue for determining further testing when indicated.

“Cardiovascular Testing and Clinical Outcomes in Emergency Department Patients With Chest Pain”

http://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2633257

Ottawa, the Land of Rules

I’ve been to Canada, but I’ve never been to Ottawa. I suppose, as the capital of Canada, it makes sense they’d be enamored with rules and rule-making. Regardless, it still seems they have a disproportionate burden of rules, for better or worse.

This latest publication describes the “Ottawa Chest Pain Cardiac Monitoring Rule”, which aims to diminish resource utilization in the setting of chest pain in the Emergency Department. These authors posit the majority of chest pain patients presenting to the ED are placed on cardiac monitoring in the interests of detecting a life-threatening malignant arrhythmia, despite such being a rare occurrence. Furthermore, the literature regarding alert fatigue demonstrates greater than 99% of monitor alarms are erroneous and typically ignored.

Using a sample of 796 chest pain patients receiving cardiac monitoring, these authors validate their previously described rule for avoiding cardiac monitoring: patients who are chest pain-free and have a normal or non-specific ECG may forgo it. In this sample, 284 patients met these criteria, and none of them suffered an arrhythmia requiring intervention.
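As decision instruments go, this one is about as simple as they come – a sketch, with illustrative parameter names (not the study’s data dictionary):

```python
# Ottawa Chest Pain Cardiac Monitoring Rule, as described above: a patient
# may be removed from monitoring when both low-risk criteria are met.
# Parameter names are illustrative, not taken from the study.

def can_forgo_monitoring(chest_pain_free: bool, ecg_normal_or_nonspecific: bool) -> bool:
    return chest_pain_free and ecg_normal_or_nonspecific

print(can_forgo_monitoring(True, True))    # True -> may be removed from monitoring
print(can_forgo_monitoring(True, False))   # False -> ischemic ECG changes, keep monitoring
```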

While this represents 100% sensitivity for their rule, as a resource utilization intervention there is obviously room for improvement. Only 2.9% of patients not meeting the rule suffered an arrhythmia – mostly just atrial fibrillation requiring pharmacologic rate or rhythm control. These criteria probably ought to be considered just a minimum standard, and there is plenty of room for additional exclusions.

Anecdotally, not only do most of our chest pain patients in my practice not receive monitoring – many receive their entire work-up in the waiting room!

“Prospective validation of a clinical decision rule to identify patients presenting to the emergency department with chest pain who can safely be removed from cardiac monitoring”
http://www.cmaj.ca/content/189/4/E139.full

A qSOFA Trifecta

There’s a new sepsis in town – although, by “new”, it’s not very new anymore. We’re supposedly all-in on Sepsis-3, which in theory is superior to the old sepsis.

One of the most prominent and controversial aspects of the sepsis reimagining is the discarding of the flawed Systemic Inflammatory Response Syndrome criteria and its replacement with the Quick Sequential Organ Failure Assessment. In theory, qSOFA replaces the non-specific items from SIRS with physiologic variables more closely related to organ failure. However, qSOFA was never prospectively validated or compared prior to its introduction.
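For reference, the replacement instrument is trivially simple – a sketch of the published Sepsis-3 qSOFA criteria:

```python
# qSOFA per the Sepsis-3 definitions: one point each for respiratory rate
# >= 22/min, systolic blood pressure <= 100 mmHg, and altered mentation
# (GCS < 15); a total of >= 2 is the advertised "high risk" cut-off.

def qsofa(resp_rate: float, systolic_bp: float, gcs: int) -> int:
    return sum([
        resp_rate >= 22,
        systolic_bp <= 100,
        gcs < 15,
    ])

print(qsofa(resp_rate=24, systolic_bp=95, gcs=15))  # 2 -> positive screen
```

Note the appeal: no laboratory values, just three bedside observations – which is exactly why its disappointing test characteristics below sting.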

These three articles give us a little more insight – and, as many have voiced concern already, it appears we’ve just replaced one flawed agent with another.

The first article, from JAMA, describes the performance of qSOFA against SIRS and a 2-point increase in the full SOFA score in an ICU population. This retrospective analysis of 184,875 patients across 15 years of registry data from 182 ICUs in Australia and New Zealand showed very little difference between SIRS and qSOFA with regard to predicting in-hospital mortality. Both screening tools were also far inferior to the full SOFA score – although, in practical terms, the differences in adjusted AUC were only between ~0.69 for SIRS and qSOFA and 0.76 for SOFA. As prognostic tools, then, none of these are fantastic – and, unfortunately, qSOFA did not seem to offer any value over SIRS.

The second article, also from JAMA, provides some of the first prospective data regarding qSOFA in the Emergency Department. This sample is 879 patients with suspected infection, followed for in-hospital mortality or ICU admission. The big news from this article is the AUC for qSOFA of 0.80, compared with 0.65 for SIRS or “severe sepsis”, as defined by SIRS plus a lactate greater than 2 mmol/L. However, at a cut-off of 2 or more for qSOFA – the advertised cut-off for “high risk” – the sensitivity and specificity were 70% and 79%, respectively.

Finally, a third article, from Annals of Emergency Medicine, also evaluates the performance characteristics of qSOFA in an Emergency Department population. This retrospective evaluation describes the performance of qSOFA at predicting admission and mortality, but differs from the JAMA article by applying qSOFA to a cross-section of mostly high-acuity visits, both with and without suspected infection. Based on a sample of 22,350 ED visits, they found similar sensitivity and specificity of a qSOFA score of 2 or greater for predicting mortality, 71% and 74%, respectively. Performance was not meaningfully different between those with and without infection.

It seems pretty clear, then, this score doesn’t hold a lot of value. SIRS, obviously, has its well-documented flaws. qSOFA seems to have better discriminatory value with regard to the AUC, but its performance at the cut-off level of 2 puts it right in a no-man’s land of clinical utility. It is not sensitive enough to rely upon to capture all patients at high risk for deterioration – but, then, its specificity is also poor enough that using it to screen the general ED population will still result in a flood of false positives.

So, unfortunately, these criteria are probably a failed paradigm perpetuating all the same administrative headaches as the previous approach to sepsis – better than SIRS, but still not good enough. We should be pursuing more robust decision-support built-in to the EHR, not attempting to reinvent overly-simplified instruments without usable discriminatory value.

“Prognostic Accuracy of the SOFA Score, SIRS Criteria, and qSOFA Score for In-Hospital Mortality Among Adults With Suspected Infection Admitted to the Intensive Care Unit”

http://jamanetwork.com/journals/jama/article-abstract/2598267

“Prognostic Accuracy of Sepsis-3 Criteria for In-Hospital Mortality Among Patients With Suspected Infection Presenting to the Emergency Department”

http://jamanetwork.com/journals/jama/fullarticle/2598268

“Quick SOFA Scores Predict Mortality in Adult Emergency Department Patients With and Without Suspected Infection”

http://www.annemergmed.com/article/S0196-0644(16)31219-7/fulltext