The Door-to-Lasix Quality Measure

Will [door-to-furosemide] become the next quality measure in modern HF care? Though one could understand enthusiasm to do so ….


No one would understand such enthusiasm, despite the hopeful, soaring rhetoric of the editorial accompanying this article. That enthusiasm will never materialize.

The thrills stacked to the ceiling here are based on the data in the REALITY-AHF registry, a multi-center, prospective, observational cohort designed to collect data on treatments administered in the acute phase of heart failure care in the Emergency Department. Twenty hospitals in Japan, a mix of academic and community centers, participated. Time-to-furosemide, based on the authors’ review of prior evidence, was prespecified as a particular data point of interest.

They split their cohort of 1,291 analyzed patients between “early” and “non-early” furosemide administration, meaning within 60 minutes of ED arrival and greater than 60 minutes, respectively. Unadjusted mortality was 2.3% in the early treatment group and 6% in the non-early group – and similar, but slightly smaller, differences persisted after multivariate adjustment and propensity matching. The authors conclude, based on these observations, that the association between early furosemide treatment and mortality may be clinically important.
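To put those unadjusted proportions into more familiar terms – a back-of-envelope sketch, not the authors’ adjusted analysis:

```python
# Back-of-envelope arithmetic on the unadjusted mortality figures above;
# the adjusted and propensity-matched differences were slightly smaller.
early_mortality = 0.023      # 2.3% in the early (<60 min) group
non_early_mortality = 0.060  # 6.0% in the non-early group

absolute_risk_difference = non_early_mortality - early_mortality
implied_nnt = 1 / absolute_risk_difference  # early treatments per death "averted"

print(f"Absolute risk difference: {absolute_risk_difference:.1%}")  # 3.7%
print(f"Implied NNT: {implied_nnt:.0f}")  # ~27
```

Of course, an “NNT” computed from an observational difference inherits every confounder in the comparison – which is rather the point of the critique that follows.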

Of course, no observational cohort can make the leap from association to causation. It is, however, infeasible to randomize patients with acute heart failure to early vs. non-early furosemide – so this is likely close to the highest level of evidence we will receive. Any attempt at adjustment and propensity matching will always be limited by unmeasured confounders, despite incorporating nearly 40 different variables. Finally, patients with pre-hospital diuretic administration were excluded, which is a bit odd, as they would make for an interesting comparison group on their own.

All that said, I do believe their results are objectively valid – if clinically uninterpretable. The non-early furosemide cohort includes both patients who received medication in the first couple hours of their ED stay and those whose first furosemide dose was not given until up to 48 hours after arrival. This probably turns the heart of the comparison into “appropriately recognized” vs. “possibly mismanaged”, rather than a narrow comparison of simply furosemide, early vs. not. Time may indeed matter – but the heterogeneity and clinical trajectories of patients treated between 60 minutes and 48 hours after ED arrival defy collapse into a dichotomous “early vs. non-early” comparison.

And this certainly ought not give rise to another nonsensical time-based quality metric imposed upon the Emergency Department.

“Time-to-Furosemide Treatment and Mortality in Patients Hospitalized With Acute Heart Failure”

Blood Cultures Save Lives and Other Pearls of Wisdom

It’s been sixteen years since the introduction of Early Goal-Directed Therapy in the Emergency Department. For the past decade and a half, our lives have been turned upside-down by quality measures tied to the elements of this bundle. Remember when every patient with sepsis was mandated to receive a central line? How great were the costs – in money, in time, and in actual harms – from these well-intentioned yet erroneous directives based off a single trial?

Regardless, thanks to the various follow-ups testing strict protocolization against the spectrum of timely recognition and aggressive intervention, we’ve come a long way. However, there are still mandates incorporating the vestiges of such elements of care – such as those introduced by the New York State Department of Health. Patients diagnosed with severe sepsis or septic shock are required to complete protocols consisting of 3-hour and 6-hour bundles including blood cultures, antibiotics, and intravenous fluids, among others.

This article, from the New England Journal, retrospectively examines the mortality rates associated with completion of these various elements, stratified by time-to-completion following initiation of the 3-hour bundle within 6 hours of arrival to the Emergency Department.

Winners: obtaining blood cultures, administering antibiotics, and measuring serum lactate
Losers: time to completion of a bolus of intravenous fluids

Of course, since blood cultures are obtained prior to antibiotic administration, these outcomes are co-linear – and the cultures don’t actually save lives, as facetiously suggested in the post heading. But antibiotic administration was associated with a fraction of a percent increase in mortality per hour of delay over the first 12 hours after initiation of the bundle. Intravenous fluid administration, however, showed no apparent association with mortality.

These data are fraught with issues, of course, relating to their retrospective nature and the limitations of the underlying data collection. Their adjusted model accounts for a handful of features, but there are still potential confounders influencing the mortality of those whose bundle was completed within 3 hours as compared with those whose was not. The differences in mortality, while a hard and important endpoint, are quite small. Earlier is probably better, but the individual magnitude of benefit will be unevenly distributed around the average, and while a delay of several hours might matter, minutes probably do not. The authors are appropriately reserved in their conclusions, stating only that these observational data support associations between mortality and antibiotic administration, and do not extend to any causal inferences.

The lack of an association between intravenous fluids and mortality, however, raises significant questions requiring further prospective investigation. Could it be, after these years wandering in the wilderness with such aggressive protocols, the only universally key feature is the initiation of appropriate antibiotics? Do our intravenous fluids, given without regard to individual patient factors, simply harm as many as they help, resulting in no net benefit?

These questions will need to be addressed in randomized controlled trials before the next level of evolution in our approach to sepsis, but the equipoise for such trials may now exist – to complete our journey from Early Goal-Directed to Source Control and Patient-Centered.  The difficulty will be, again, in pushing back against well-meaning but ill-conceived quality measures whose net effect on Emergency Department resource utilization may be harm, with only small benefits to a subset of critically ill patients with sepsis.

“Time to Treatment and Mortality during Mandated Emergency Care for Sepsis”

Correct, Endovascular Therapy Does Not Benefit All Patients

Unfortunately, that headline is the strongest takeaway available from these data.

Currently, endovascular therapy for stroke is recommended for all patients with a proximal arterial occlusion who can be treated within six hours. The much-ballyhooed “number needed to treat” for benefit is approximately five, and we have authors generating nonsensical literature with titles such as “Endovascular therapy for ischemic stroke: Save a minute—save a week” based on statistical calisthenics derived from this treatment effect.

But, anyone actually responsible for making decisions for these patients understands this is an average treatment effect. The profound improvements of a handful of patients with the most favorable treatment profiles obfuscate the limited benefit derived by the majority of those potentially eligible.
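The arithmetic behind that tension is simple. A toy illustration of how the same average treatment effect can hide very uneven individual benefit – the subgroup split below is hypothetical, not drawn from any trial:

```python
# An NNT of ~5 corresponds to a 20% absolute risk reduction, on average.
nnt = 5
average_arr = 1 / nnt  # 0.20

# The same pooled average can arise from very uneven individual effects:
# e.g., 25% of patients with a large (80%) absolute benefit and 75% with
# essentially none (hypothetical split, for illustration only).
pooled_arr = 0.25 * 0.80 + 0.75 * 0.0
assert pooled_arr == average_arr  # identical on average, very different per patient
```

The published NNT is correct for the trial population as a whole; it simply says nothing about which patient in front of you is the one in five.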

These authors have endeavored to apply a bit of precision medicine to the decision regarding endovascular intervention. Using ordinal logistic regression modeling, they used the MR CLEAN data to create a predictive model for good outcome (mRS score 0-2 at 90 days), with the IMS-III data serving as the validation cohort. The final model displayed a C-statistic of 0.69 for the ordinal model and 0.73 for good functional outcome – which is to say, the output is closer to a coin flip than an informative prediction for use in clinical practice.
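For readers unfamiliar with the metric: the C-statistic is the probability that a randomly chosen patient with a good outcome was assigned a higher predicted probability than a randomly chosen patient without one. A minimal pairwise implementation – illustrative only, not the authors’ code:

```python
def c_statistic(predicted_probs, outcomes):
    """Pairwise concordance between predictions and binary outcomes (AUC)."""
    pairs = concordant = tied = 0
    for p_pos, o_pos in zip(predicted_probs, outcomes):
        for p_neg, o_neg in zip(predicted_probs, outcomes):
            if o_pos == 1 and o_neg == 0:  # pair one good-outcome with one poor-outcome patient
                pairs += 1
                if p_pos > p_neg:
                    concordant += 1
                elif p_pos == p_neg:
                    tied += 1
    return (concordant + 0.5 * tied) / pairs

# 0.5 is a coin flip; 1.0 is perfect discrimination.
# 0.69-0.73 sits uncomfortably close to the former.
print(c_statistic([0.9, 0.7, 0.4, 0.2], [1, 1, 0, 0]))  # 1.0
```

In practice one would use a library routine (e.g., scikit-learn’s `roc_auc_score`, which computes the same quantity for binary outcomes), but the pairwise definition makes the “coin flip” interpretation concrete.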

More important, however, is whether the substrate for the model is anachronistic, limiting its generalizability to modern practice. Beyond MR CLEAN, subsequent trials have demonstrated the importance of underlying tissue viability, using either CT perfusion or MRI-based selection criteria when making treatment decisions. Their model includes only a measure of collateral circulation on angiogram, which is merely a surrogate for potential tissue viability. Furthermore, the MR CLEAN cohort comprises only 500 patients, and the IMS-III validation only 260. This sample is far too small to properly develop a model for such a heterogeneous set of patients as those presenting with proximal cerebrovascular occlusion. Finally, the choice of logistic regression can be debated, simply from a model standpoint, given its assumptions about underlying linear relationships in the data.

I appreciate the attempt to improve outcomes prediction for individual patients, particularly for a resource-intensive therapy such as endovascular intervention in stroke. Unfortunately, I feel the fundamental limitations of their model invalidate its clinical utility.

“Selection of patients for intra-arterial treatment for acute ischaemic stroke: development and validation of a clinical decision tool in two randomised trials”

Discharged and Dropped Dead

The Emergency Department is a land of uncertainty. In this generally time-compressed, zero-continuity environment with limited resources, we frequently need to make relatively rapid decisions based on incomplete information. The goal, in general, is to treat and disposition patients in an advantageous fashion to prevent morbidity and mortality, while minimizing costs and other harms.

This confluence of factors leads, unfortunately, to a handful of patients who meet their end shortly following discharge. A Kaiser Permanente Emergency Department cohort analysis found 0.05% died within 7 days of discharge, and identified a few interesting risk factors regarding their outcomes. This new article, in the BMJ, describes the outcomes of a Medicare cohort following discharge – and finds both similarities and differences.

One notable difference, and a focus of the authors, is that 0.12% of patients discharged from the Emergency Department died within 7 days. This is a much larger proportion than in the Kaiser cohort; however, the Medicare population is obviously a much older cohort with greater comorbidities. Then, they found similarities regarding the risks for death – most prominently, “altered mental status”. The full accounting of clinical features is described in the figure below:

Then, there were some system-level factors as well. In their multivariate model, rural emergency departments and those with low annual volumes potentially contributed to increased risk of death. This data set is insufficient to draw any specific conclusions regarding these contributing factors, but it raises questions for future research. In general, however, these are interesting – and not terribly surprising – data, even if it is hard to identify specific operational interventions based on these broad strokes.
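To put the two cohorts’ headline proportions in absolute terms – simple arithmetic on the figures above:

```python
# Seven-day post-discharge mortality, expressed per 10,000 ED discharges.
kaiser_proportion = 0.0005    # 0.05%, Kaiser Permanente cohort
medicare_proportion = 0.0012  # 0.12%, Medicare cohort

kaiser_per_10k = kaiser_proportion * 10_000      # ~5 deaths per 10,000 discharges
medicare_per_10k = medicare_proportion * 10_000  # ~12 deaths per 10,000 discharges
ratio = medicare_proportion / kaiser_proportion  # ~2.4-fold difference
```

Even the “much larger” Medicare figure remains on the order of a dozen deaths per 10,000 discharges – rare enough to frustrate any operational intervention aimed at this population.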

“Early death after discharge from emergency departments: analysis of national US insurance claims data”

Insight Is Insufficient

In this depressing trial, we witness a disheartening truth – physicians won’t necessarily do better, even if they know they’re not doing well.

This study tested a mixed educational and peer-comparison intervention on primary care physicians in Switzerland, with the end goal of improving antibiotic stewardship for common ambulatory complaints. The 2,900 “worst-performing” physicians with respect to antibiotic prescribing rates were enrolled and randomized to the study intervention or to none. The study intervention consisted of materials regarding appropriate prescribing, along with personalized feedback on where each physician’s prescribing rate ranked within the entire national cohort. The core of their hypothesis was whether this passive knowledge of peer performance alone would exert a normalizing influence over practice.

Unfortunately, despite providing these physicians with this insight, as well as tools for improvement, the net effect of the intervention was essentially zero. There were some observations regarding changes in prescribing rates for certain age groups and for certain types of antibiotics, but dredging through these secondary outcomes leads only to unreliable conclusions.

These are not particularly surprising data. Such passive feedback mechanisms, unhitched from material consequences, have never previously been shown to be effective. There are other, more effective mechanisms – focused education, decision-support interventions, and shared decision-making – but, for a fragmented, national health system, this represented a relatively inexpensive model to test.

Try again!

“Personalized Prescription Feedback Using Routinely Collected Data to Reduce Antibiotic Use in Primary Care”

Stumbling Around Risks and Benefits

Practicing clinicians contain multitudes: the vastness of critical medical knowledge applicable to the nearly infinite permutations of individual patients. However, lost in the shuffle is apparently a grasp of the basic fundamentals necessary for shared decision-making: the risks, benefits, and harms of many common treatments.

This simple research letter describes a survey distributed to a convenience sample of residents and attending physicians at two academic medical centers. Physicians were asked to estimate the incidence of a variety of effects of common treatments, both positive and negative. Sample questions and results:

[Figure: treatment effect estimates]
The green responses are those which fell into the correct range for the question. As you can see, in these two questions, hardly any physician surveyed guessed correctly.  This same pattern is repeated for the remaining questions – involving peptic ulcer prevention, cancer screening, and bleeding complications on aspirin and anticoagulants.

Admittedly, only a quarter of participants were attending physicians – though no gross differences in performance were observed between the various levels of experience. Then, some of the ranges are narrow, with small magnitudes of effect separating the “correct” and “incorrect” answers. Regardless, the general conclusion of this survey – that we’re not well-equipped to communicate many of the most common treatment effects – is probably valid.

“Physician Understanding and Ability to Communicate Harms and Benefits of Common Medical Treatments”

Your New Career in “Waiting Room Medicine”

A few years back, a facetious advertisement in the Canadian Journal of Emergency Medicine promoted the availability of fellowship positions in “Waiting Room Medicine”, a comedic take on the struggles of the specialty to manage increasing patient volume with limited resources. While there are certainly Emergency Departments with ample space and “white glove”-type service – see the for-profit expansion of free-standing EDs in states like Texas – there are also publicly funded and other EDs that struggle with physical bed space for a variety of reasons.

This study attempts to quantify the effect of an intervention utilized by many overburdened or otherwise saturated EDs – starting the initial evaluation in triage with either provider-directed or protocolized orders. At UCLA/Olive-View, all patients presenting to an already-full ED received an initial rapid evaluation by an attending physician or nurse practitioner. During the 10-month study period, non-pregnant adults with abdominal pain were randomized either to receive initial evaluation orders following this evaluation or to return to the waiting room to await full evaluation later, pending bed availability.

There were 1,691 patients enrolled and randomized, with approximately 10% excluded from analysis, mostly because they left the ED before their evaluation was complete. Overall, initiation of the work-up in triage saved patients approximately a half-hour, on average, of bedded time in the ED. This was reflected by a similar absolute decrease in overall ED length-of-stay. There were a couple of other interesting tidbits unique to their execution:

  • The most profound difference associated with WR medicine was simply blood and urine testing. While imaging could be ordered up front, it was rarely done.
  • Some of the advantages related to the WR blood testing were minimized by ~13% of patients receiving further testing after being bedded in the ED.
  • Patients randomized to WR medicine received, on average, a greater number of diagnostics per patient, probably representing resource waste.

So – yes, this probably accurately reflects the impact of orders placed in triage: some wasted resources based on the initial, incomplete evaluation, with a trade-off of potential time savings. The extent to which your system might benefit from a similar set-up is probably related to your level of chronic bed scarcity.

“Initiating Diagnostic Studies on Patients With Abdominal Pain in the Waiting Room Decreases Time Spent in an Emergency Department Bed: A Randomized Controlled Trial”

The Downside of Antibiotic Stewardship

There are many advantages to curtailing antibiotic prescribing. Costs are reduced, fewer antibiotic-resistant bacteria are selected for, and treatment-associated adverse events are avoided.

This retrospective, population-based study, however, illuminates the potential drawbacks. Using electronic record review spanning 10 years of general practice encounters, these authors compared infectious complication rates between practices with low and high antibiotic prescribing rates. With 45.5 million person-years of follow-up after office visits for respiratory tract infections, these data offer both reason for reassurance and reason for further concern.

On the “pro” side, cases of mastoiditis, empyema, bacterial meningitis, intracranial abscess, and Lemierre’s syndrome were no different between practices with high prescribing rates (>58%) and those with low rates (<44%). However, there is a reasonably clear linear relationship with excess follow-up encounters for both pneumonia and peritonsillar abscess. Incidence rate ratios were 0.70 compared with reference for pneumonia and 0.78 for peritonsillar abscess. That said, the absolute differences can best be described as a “large handful” and a “small handful” of extra cases per 100,000 encounters.
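Translating a rate ratio into the absolute terms the authors emphasize is a one-line calculation – a sketch with an assumed baseline incidence, since the paper quotes only “handfuls” per 100,000:

```python
# Converting an incidence rate ratio (IRR) into excess cases per 100,000
# encounters. The baseline below is a hypothetical figure for illustration,
# not a number from the paper.
irr = 0.70                  # pneumonia IRR for the comparison group vs. reference
reference_per_100k = 150.0  # assumed incidence in the reference group

comparison_per_100k = reference_per_100k * irr
excess_per_100k = reference_per_100k - comparison_per_100k
print(f"Excess cases per 100,000: {excess_per_100k:.0f}")  # 45, under these assumptions
```

The point being that an impressive-sounding relative difference collapses to a modest absolute one when the baseline event rate is low – which is why the cohort size matters so much here.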

There are many rough edges and flaws relating to these data, some of which are probably adequately defeated by the massive cohort size. I think it is reasonable to interpret this article as accurately reflecting true harms from antibiotic stewardship. More work should absolutely be pursued in terms of strategies to mitigate these potential downstream complications, but I believe the balance of benefits and harms still falls on the side of continued efforts in stewardship.

“Safety of reduced antibiotic prescribing for self limiting respiratory tract infections in primary care: cohort study using electronic health records”

The “IV Antibiotics” Sham

Among the many overused tropes in medicine is the myth of the supremacy of intravenous antibiotics. In the appropriate clinical context, the intravenous route is just a waste.

This is a retrospective analysis of 36,405 patients hospitalized for community-acquired pneumonia, for whom a fluoroquinolone was selected as therapy. The vast majority – 94% – received an intravenous dose, while the remaining 2,205 (6%) were treated orally. Unadjusted mortality favored the oral dose – unsurprisingly, as those patients also generally had fewer comorbid conditions. In their multivariate, propensity-matched analysis, there was no difference in mortality, intensive care unit escalation, or mechanical ventilation.

These results are wholly unsurprising, and the key feature is the class of antibiotic involved. Fluoroquinolones – like trimethoprim-sulfamethoxazole, metronidazole, and clindamycin, among others – have excellent oral absorption. I have seen many a referral to the Emergency Department for “intravenous antibiotics” prior to an anticipated discharge to home therapy, when any one of these choices could have obviated the entire encounter.

“Association Between Initial Route of Fluoroquinolone Administration and Outcomes in Patients Hospitalized for Community-acquired Pneumonia”

Pan-Scans Don’t Save Lives

Humans are fallible. We don’t always make good choices, and our patients – bless their hearts – can sometimes be time bombs wrapped in meat. Logically, then, as many trauma services have concluded, the solution is to eliminate the weak link: don’t let the human choose which parts of the body to scan – just scan it all.

This is REACT-2, a randomised [sic] trial evaluating precisely the limits of human judgment in a resource-utilization versus immediacy context. In this multi-center trial, adult trauma patients with suspected serious injury were randomized to either imaging guided by clinical evaluation or total-body CT. The primary outcome was in-hospital mortality, with secondary outcomes relating to timeliness of diagnosis, mortality in other time frames, morbidity, and costs.

This was a massive undertaking, with 1,403 patients randomly assigned to one of the arms, with ~540 in each arm successfully allocated and included in their primary analysis.  Each cohort was well-matched on baseline characteristics, including all physiologic markers, although the Triage Revised Trauma Score was slightly lower (worse) for the total-body CT group.  The results, in most concise form, weakly favor selective scanning.  There was no difference in mortality nor complications nor length-of-stay nor virtually any reliable secondary outcome.  Costs, as measured in European terms, were no different, despite the few scans obviated.  Time-to-diagnosis was slightly faster in the total-body CT group, owing to skipping initial conventional radiography, while radiation exposure was slightly lower in the selective scanning group.

In some respects, it is not surprising no differences were found – as CT was still frequently utilized in the selective-scanning cohort, nearly half of which ultimately underwent total-body CT. There were some differences noted in in-hospital Injury Severity Score between groups, and I agree with Rory Spiegel’s assertion that this is probably an artifact of the routine total-body CT. This study can be used to justify either strategy, however – with selective-CT proponents focusing on the lack of differences in patient-oriented outcomes, and total-body CT proponents noting the minimal resource and radiation savings came at the expense of timeliness.

“Immediate total-body CT scanning versus conventional imaging and selective CT scanning in patients with severe trauma (REACT-2): a randomised controlled trial”