The Magic of Telestroke

The use of telestroke assessment is sweeping the nation: like a meme, it’s viral, and it won’t get out of your head.

It is, understandably, difficult to staff 24-hour neurology support with the responsiveness required by the quality guidelines for the evaluation of acute stroke. Likewise, it is difficult to standardize care across all providers in the Emergency Department – giving further fits to those administrators engaged with guideline compliance and certification.

So, remote assessment via telestroke.

This is a brief before-and-after report regarding the use of telestroke at the 21 hospitals in the Kaiser Northern California region. This was rolled out over the course of 2015-16, and compared with a seasonally-adjusted 9-month period for each hospital. As with any before-and-after study, there are always unmeasured confounders impacting care and processes, but these authors presented a few findings:

  • Daily “stroke alerts” in the system increased from 8.8 per day to 11.7 per day.
  • The rate of alteplase administration increased from 13.1% to 17.6%.
  • The rate of stroke mimics receiving alteplase increased from 3.9% to 6.8%.
  • The rate of symptomatic intracranial hemorrhage increased from 2.2% to 3.8%.
  • Door-to-needle time decreased from a mean of 63.2 minutes to 41.8 minutes.

Is telestroke responsible for all these “improvements”? Again, with all the other potential process initiatives in play, it’s impossible to say for certain. What is apparent, however, is that this version of “faster” is not obviously “better” – treating greater numbers of mimics, along with an increase in bleed rate, is not obviously higher-quality care. Whether this, too, can be blamed solely on telestroke is likewise a reasonable question not specifically answered here.

“Novel Telestroke Program Improves Thrombolysis for Acute Stroke Across 21 Hospitals of an Integrated Healthcare System”
http://stroke.ahajournals.org/content/early/2017/12/14/STROKEAHA.117.018413

Who Still Acutely Uses Fecal Occult Blood Tests?

If you trained or practiced in the last few decades, there’s no doubt you’ve performed hundreds, if not thousands, if not tens of thousands, of fecal occult blood tests. For many years, this has been part of some routine evaluations for suspected gastrointestinal bleeding or anemia without another adequately identified source.

However, this test is pointless, as these folks at Parkland succinctly illustrate. In evaluating the value of the FOBT in the acute clinical setting, they observe two features obviating its utility. First, they argue the test characteristics are utterly inadequate – there are confounders contributing to both false negatives and false positives, leading to either delays or inappropriate interventions. Then, they note the ultimate clinical course depends on the other presenting features rather than the result of the FOBT.

Specifically, Parkland went from performing nearly 8,000 FOBTs (mostly in the Emergency Department) to zero.

You can too.

“Eliminating in-Hospital Fecal Occult Blood Testing: Our Experience with Disinvestment”

https://www.sciencedirect.com/science/article/pii/S0002934318302195

Using PERC & Sending Home Pulmonary Emboli For Fun and Profit

The Pulmonary Embolism Rule-Out Criteria have been both lauded and maligned, depending on which day the literature is perused. There are case reports of large emboli in patients who are PERC-negative, as well as reports of PE prevalence as high as 5% – in contrast to its derivation meeting the stated point of equipoise at <1.8%. So, the goal here is to be the prospective trial to end all trials and most accurately describe the impact of PERC on practice and outcomes.

This is a cluster-randomized trial across 14 Emergency Departments in France. Centers were randomized to either a PERC-based work-up strategy for PE, or a “conventional” strategy in which virtually every patient considered for PE was tested using D-dimer. Interestingly, these 14 centers also crossed over to the alternative algorithm approximately halfway through the study period, so every ED was exposed to both interventions – some used PERC first, and vice versa.

Overall, they recruited 1,916 patients across the two enrollment periods, and these authors focused on the 1,749 who received per-protocol testing and were not lost to follow-up. The primary outcome was any new diagnosis of venous thromboembolism at 3-month follow-up – their measure of, essentially, clinically important missed VTE upon exiting the algorithm. The headline result, in their per-protocol population: 1 patient was diagnosed with VTE in follow-up in the PERC group, compared with none in the control cohort. This met their criteria for non-inferiority, and, just at face value, the PERC-based strategy is clearly reasonable. There were 48 patients lost to follow-up, but given the overall prevalence of PE in this population, it is unlikely these lost patients would have affected the overall results.

There are a few interesting bits to work through from the characteristics of the study cohort. The vast majority of patients considered for the diagnosis of PE were “low risk” by either Wells or the simplified Revised Geneva Score. However, 91% of those in the PERC cohort were “low risk”, as compared to 78% in the control cohort – which, considering the structure of this trial, seems unlikely to have occurred by chance alone. In the PERC cohort, about half failed to meet PERC, and these patients – plus a few protocol violations – moved forward with D-dimer testing. In the conventional cohort, 99% were tested with D-dimer in accordance with their algorithm.

The descriptive results at this point are, again, a bit odd. D-dimer testing (threshold ≥0.5 µg/mL) was positive in 343 of the PERC cohort and 471 of the controls, yet physicians moved forward with CTPA in only 38% of the PERC cohort and 46% of the conventional cohort. It is left entirely unaddressed why patients entered a PE rule-out pathway and ultimately never received a definitive imaging test after a D-dimer above threshold. For what it’s worth, the fewer patients undergoing evaluation for PE in the PERC cohort led to fewer diagnoses of PE, fewer downstream hospital admissions and anticoagulant prescriptions, and a shorter ED length of stay. The absolute numbers are small, but patients in the control cohort undergoing CTPA were more likely to have subsegmental PEs (5 vs. 1) – which, again, ought to generally make sense.

So, finally, what is the takeaway here? Should you use a PERC-based strategy? As usual, the answer is: it depends. First, it is almost certainly the case that a PERC-based algorithm is safe to use. Then, if your current approach is to carpet-bomb everyone with D-dimer and act upon it, yes, you may see dramatic improvements in ED processes and resource utilization. However, as we see here, the prevalence of PE is so low that strict adherence to a PERC-based algorithm is still too clinically conservative. Many patients with elevated D-dimers did not undergo CTPA in this study – and, with three-month follow-up, they obviously did fine. Frankly, given the shifting gestalt relating to the work-up of PE, the best cut-off is probably not PERC, but simply stopping the work-up of most patients who are not intermediate- or high-risk.

“Effect of the Pulmonary Embolism Rule-Out Criteria on Subsequent Thromboembolic Events Among Low-Risk Emergency Department Patients: The PROPER Randomized Clinical Trial”
https://jamanetwork.com/journals/jama/fullarticle/2672630

TACO Time!

One of the best acronyms in medicine: TACO. Of no solace to those afflicted by it, transfusion-associated circulatory overload is one of the least-explicitly recognized complications of blood product transfusion. The consent for blood products typically focuses on the rare transmissibility of viruses and occurrence of autoimmune reactions, yet TACO is far more frequent.

This report from an ongoing transfusion surveillance study catalogued 20,845 patients receiving transfusions of 128,263 blood components. The incidence of TACO was one case per 100 transfused patients. Then, these authors identified 200 patients suffering TACO, and compared their baseline characteristics to 405 patients receiving similar transfusion intensity, but who did not develop TACO. Clinically relevant risk factors for developing TACO identified in their analysis were, essentially:

  • Congestive heart failure
  • End-stage or acute renal disease
  • End-stage liver disease
  • Need for emergency surgery

… or, basically, the population for whom a propensity for circulatory overload would be expected. It appears, generally speaking, clinicians were aware of the increased risks in these specific patients, as a greater percentage received diuretic treatment prior to transfusion as well. 30-day mortality in those suffering TACO was approximately 20%, roughly double that of the matched controls.
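For the arithmetically inclined, the headline figures above check out. A quick sketch in Python – note the control-group mortality is my assumption, back-derived from the “roughly double” claim, not a number reported in the study:

```python
# Sanity-check of the TACO figures quoted above.
transfused_patients = 20_845
taco_cases = 200  # patients identified with TACO

# "One case per 100 transfused patients"
incidence_per_100 = 100 * taco_cases / transfused_patients
print(f"Incidence: {incidence_per_100:.2f} per 100 transfused patients")  # ~0.96

# ~20% 30-day mortality with TACO, "roughly double" the matched controls;
# the 10% control figure is an assumption for illustration only.
taco_mortality = 0.20
control_mortality = 0.10
print(f"Relative mortality: {taco_mortality / control_mortality:.1f}x")  # 2.0x
```

So “one case per 100” is, if anything, a slight round-up from 0.96 – close enough for a surveillance study.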

More good reasons to adhere to as many restrictive transfusion guidelines as feasible.

“Contemporary Risk Factors and Outcomes of Transfusion-Associated Circulatory Overload”

https://www.ncbi.nlm.nih.gov/pubmed/29300236

The Best Antibiotic Stewardship Money Can Buy

Believe it, or not:

Use of procalcitonin to guide antibiotic treatment in patients with acute respiratory infections reduces antibiotic exposure and side-effects, and improves survival. Widespread implementation of procalcitonin protocols in patients with acute respiratory infections thus has the potential to improve antibiotic management with positive effects on clinical outcomes and on the current threat of increasing antibiotic multiresistance.

So, should we all be jumping on the procalcitonin bandwagon? Chances are, you probably already have – check with your critical care team, and I expect you’ll find some implementation of a procalcitonin-based protocol supporting antibiotic stewardship. The underlying concept is hardly unreasonable – when sensitive markers of bacterial infection are low, antibiotics can be discontinued.

However, the evidence base – as helpfully pooled in this individual-patient meta-analysis – is nothing more than a carefully orchestrated disinformation campaign by the manufacturers of these assays. Roche, Thermo-Fisher and bioMérieux have an obvious vested business interest in publishing favorable research findings in support of procalcitonin-based treatment algorithms, and it should come as no surprise the authors have a couple items to declare:

PS, MC-C, and BM have received support from Thermo-Fisher and bioMérieux to attend meetings and fulfilled speaking engagements. BM has served as a consultant for and received research support from Thermo-Fisher. HCB and MB have received research support from Thermo-Fisher for a previous meta-analysis regarding procalcitonin. DWdL’s hospital received financial support for the randomisation tool by ThermoFisher. DS, OB, and MT have received research support from Thermo-Fisher. TW and SS have received lecture fees and research support from Thermo-Fisher. CEL has received lecture fees from Brahms and Merck Sharp & Dohme-Chibret. JC has received consulting and lecture fees from Pfizer, Brahms, Wyeth, Johnson & Johnson, Nektar-Bayer, and Arpida. MW has received consulting and lectures fees from Merck Sharp & Dohme-Chibret, Janssen Cilag, Gilead, Astellas, Sanofi, and Thermo-Fisher. FT’s institution received funds from Brahms. CC has received an unrestricted grant of €2000 from Thermo-Fisher Scientific, and non-financial support from bioMérieux for the ProToCOLD study. YS has received unrestricted research grants from Thermo-Fisher, bioMérieux, Orion Pharma, and Pfizer. ARF has served on advisory boards for Novavax, Hologic, Gilead, and MedImmune; and has received research funding from AstraZeneca, Sanofi Pasteur, GlaxoSmithKline, and ADMA Biologics. J-USJ declares that he was invited to the European Respiratory Society meeting 2016 by Roche Pharmaceuticals.

And, it’s clearly no coincidence most of the 26 trials included in this systematic review are authored by those same financially-supported authors above – so, it’s turtles all the way down for this meta-analysis.

The results, then, for what they’re worth, despite all the concerted effort to spin them, are rather bland. The mortality differences are zero in the outpatient settings, and small enough on the intensive care unit side to potentially be skewed by design. The only signal I might consider reliable in these data is: procalcitonin does reduce antibiotic exposure. This manifests in practice in two different fashions, depending on the setting. In the outpatient setting, where virtually all the antibiotics are unnecessary (one of these trials enrolled patients with “bronchitis”!), it gives clinicians a crutch to fall back upon to prevent them from practicing bad medicine. In the intensive care unit, it helps titrate the use of broad-spectrum intravenous antibiotics, which is likely to reduce a number of important downstream effects. I don’t object to the latter application, but my recommendation for the former: just don’t practice bad medicine in the first place (easier said than done, sadly).

So, the takeaway I’d like to promote in the context of this article – and its simultaneously published Cochrane Review by the same COI-infested authors – is skepticism regarding the effect sizes for procalcitonin-guided therapy. These data do not exclude its clinical utility for the stated purposes, but its use ought to be considered in the narrowest of clinical situations, and probably in those at the highest risk for harms from otherwise clinically confounded antibiotic exposures.

“Effect of procalcitonin-guided antibiotic treatment on mortality in acute respiratory infections: a patient level meta-analysis”
https://www.ncbi.nlm.nih.gov/pubmed/29037960

Also, if you’re persistent enough to scroll to page 126 in the Cochrane Review full text, you glean this lovely pearl:
Philipp Schuetz received support (paid to his employer) from Thermo Fisher, Roche Diagnostics, Abbott and bioMerieux to attend meetings and fulfil speaking engagements. These conflicts breach Cochrane’s Commercial Sponsorship Policy (Clause 3), therefore Philipp Schuetz will step down as lead author at the next update of the review. Dr Schuetz’s declared conflicts were referred to the Funding Arbiter Panel and Cochrane’s Deputy Editor-in-Chief who have agreed this course of action but as an exception which does not set a precedent for similar situations in the future.

The Top “Overuse” of 2016

Another entry in JAMA Internal Medicine’s lovely “Less is More” series, this is a “systematic review” of the previous year’s literature regarding potentially unnecessary care. Living here in the asylum, it seems all our fellow inmates and I are consigned to issuing weather reports from the tempest – but, hey, baby steps.

Their “systematic review” is not particularly rigorous.  It’s basically a literature search, followed by a subjective distillation by author consensus to those considered to be the most potentially impactful – but, regardless, their list is worth reviewing. Without further ado, the highlights of their ten selections:

  • Transesophageal echocardiography is more informative than transthoracic in illuminating the etiology of a stroke, but the additive information does not have a clear downstream benefit on outcomes.
  • Patients undergoing computed tomography to rule out pulmonary embolism without algorithm-compliant use of D-dimer suffer from overuse and low-value testing.
  • CT use increased in all Emergency Department patients with respiratory symptoms, with no evidence of downstream change in prescribing, hospital admission, or mortality.
  • Supplemental oxygen does not demonstrate benefit in patients with chronic obstructive pulmonary disease and mild exertional hypoxia.
  • Small improvements in antibiotic prescribing were seen when comparisons to peers were performed.
  • A shared decision-making implementation for Emergency Department patients with chest pain increased patient engagement and demonstrated a secondary effect of diminished admission and cardiac testing.

Wizard.

“2017 Update on Medical Overuse: A Systematic Review”
https://www.ncbi.nlm.nih.gov/pubmed/28973402

Are We Killing People With 30-Day Readmission Targets?

Ever since the Center for Medicare and Medicaid Services announced their intention to penalize hospitals for early readmissions, folks have been worrying about the obvious consequences: would a focus on avoidance place patients at risk? Would patients best served in the hospital be pushed into other settings for suboptimal care?

That is the argument made in this short piece in the Journal of the American College of Cardiology. The authors look back at the last two decades of heart failure readmissions and short-term mortality, and take issue with the fundamental premise of the quality measure, the inequities associated with it, and its potential unintended harms. Their most illustrative example: when patients die outside the hospital within 30 days, they paradoxically contribute to apparently improved performance, as measured by 30-day readmission.

They back up their point using the aggregate data analyzing readmissions between 2008 and 2014, published previously in JAMA, focusing primarily on the heart failure component. The original JAMA analysis paired individual hospital monthly readmission and risk-adjusted mortality, and was unable to identify an increased risk of death relating to reductions in 30-day readmissions. These authors say: too much tree, not enough forest. In the decade prior to the announcement of 30-day readmission penalties, 30-day heart failure mortality had dropped 16.2%, but over the analysis period it was back on the rise. In 2008, 30-day mortality was 7.9%; by 2014, it was up to 9.2% – a 16.5% relative increase, and an even larger one measured against the pre-existing trend of decreasing mortality.
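The relative-change arithmetic quoted above is easy enough to verify, using the mortality figures as reported:

```python
# 30-day heart failure mortality: 7.9% in 2008, 9.2% in 2014.
mortality_2008 = 7.9
mortality_2014 = 9.2

# Relative (not absolute) increase over the analysis period.
relative_increase = (mortality_2014 - mortality_2008) / mortality_2008 * 100
print(f"Relative increase: {relative_increase:.1f}%")  # ~16.5%, as stated
```

An absolute increase of 1.3 percentage points, in other words – the “16.5%” framing is the relative change, which is worth keeping in mind when comparing it against the 16.2% relative decline of the prior decade.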

These are obviously two very different ways of looking at the same data, but the implication is fair: those charged with developing a quality measure should be able to conclusively demonstrate its effectiveness and safety. If any method of analysis raises concerns regarding the accepted balance of value and harm, the measure should be placed on a probationary status while rigorous re-evaluation proceeds.

“The Hospital Readmission Reduction Program Is Associated With Fewer Readmissions, More Deaths”
http://www.sciencedirect.com/science/article/pii/S0735109717393610

When Aggressive Sepsis Treatment Kills

Much has been made of efforts to detect and treat sepsis as early as possible after presentation, with many post-hoc analyses seeming to demonstrate time-sensitive mortality benefits associated with receiving the various components included in our “quality” measures. However, just like early goal-directed therapy back in the day, it has never truly become clear which element of early sepsis care confers the survival benefit. Absent specific data regarding how to best tailor therapy to the individual patient, we simply bludgeon everyone with the same sepsis bundle.

And, as we see here, that generalization is likely harmful.

This is a small randomized trial from Zambia – not to be confused with Nambia – in which 212 patients presenting with suspected infection, at least 2 systemic inflammatory response syndrome criteria, and hypotension were randomized to an early resuscitation protocol or “usual care”. The early resuscitation protocol sounds, generally speaking, similar to our modern approach to sepsis care – early intravenous fluid boluses with frequent intravascular volume assessment, vasopressors, and transfusions for severe anemia. The usual care cohort received, essentially, the same, only less so.

The two groups enrolled were roughly equivalent – but nothing like our sepsis cohort here in the United States. Approximately 90% of the patients enrolled were positive for the human immunodeficiency virus, half of the positive culture results were tuberculosis, the mean hemoglobin was 7.8 g/dL, and the majority of patients had been bedridden for over a week prior to presentation. Intravascular volume status and response to resuscitation were assessed by evaluating jugular venous pressure, tachypnea, and peripheral oxygenation. Finally, unlike many modern settings, most patients received dopamine as their vasopressor of choice and were admitted to general medical wards rather than intensive care units. In short, these are patients and care settings unlike those in most industrialized nations.

Mortality, not unexpectedly, was high – but it was much higher in those randomized to the early resuscitation cohort: 48.1%, compared with 33.0% with usual care. This mortality difference persisted out to 28 days, with over 60% mortality in the early resuscitation group at that time, compared with just over 40% in usual care.

This trial does not necessarily call into question the general principles of modern sepsis care, but it certainly provides a couple valuable lessons. The first, and most obvious, is the cautionary tale regarding generalizing research findings from one setting to another. Even a reasonably important, beneficial effect size can be transformed into a greater magnitude of harm if applied in another clinical setting. Then, this should clearly make us re-examine our current approach to sepsis care to ensure there is not a subgroup for whom early resuscitation is, in fact, the wrong answer. Our blind pursuit of checking boxes for quality measures, while generating an overall beneficial effect, is probably resulting in waste and harms for a substantial subgroup of those presenting with sepsis and septic shock.

“Effect of an Early Resuscitation Protocol on In-hospital Mortality Among Adults With Sepsis and Hypotension”
http://jamanetwork.com/journals/jama/fullarticle/2654854

More Futility: Apneic Oxygenation?

Here’s another pendulum swing to throw into the gears of medicine – an apparent failure of apneic oxygenation to prevent hypoxemia during intubation in the Emergency Department. Apneic oxygenation – passive oxygenation during periods of periprocedural apnea – seems reasonable in theory, and several observational studies support its use. However, in a randomized, controlled ICU setting – the FELLOW trial – no difference in hypoxemia was detected.

This is the ENDAO trial, in which patients were randomized during ED intubation, with a primary outcome of mean lowest oxygen saturation during or immediately following. These authors prospectively enrolled 206 patients of 262 possible candidates, with 100 in each group ultimately qualifying for their analysis. The two groups were similar with regard to initial oxygen levels, pre-oxygenation levels, and apnea time. Then, regardless of their statistical power calculations and methods, it is fairly clear at basic inspection their outcomes are virtually identical – in mean hypoxemia, SpO2 below 90%, SpO2 below 80%, or with regard to short-term or in-hospital mortality. In the setting in which this trial was performed, there is no evidence to suggest a benefit to apneic oxygenation.

It is reasonable to note all patients included in this study required a pre-oxygenation period of 3 minutes at 100% FiO2 – and that oxygen could be delivered by bag-valve mask, BiPAP, or non-rebreather with flush-rate oxygen. These are not necessarily equivalent methods of pre-oxygenation, but, at the least, the techniques were not different between groups (>80% NRB). It is reasonable to suggest passive oxygenation may be more beneficial in those without an adequate pre-oxygenation period, but it would certainly be difficult to test prospectively, and difficult to anticipate a clinically important effect size.

Adding complexity to any procedure – whether with additional monitoring and alarms or interventions of limited efficacy – adds to the cognitive burden of the healthcare team, and probably has deleterious effects on the most critical aspects of the procedure. It is not clear that apneic oxygenation reliably improves patient-oriented outcomes, and it does not represent a mandatory element of rapid-sequence intubation.

“EmergeNcy Department use of Apneic Oxygenation versus usual care during rapid sequence intubation: A randomized controlled trial”
http://onlinelibrary.wiley.com/doi/10.1111/acem.13274/full

Even the Best EHR Still Causes Pain

Paper is gone; there’s no going back. We’re all on electronic health record systems (cough, Epic) now, with all the corresponding frustrations and inefficiencies. Some have said, however, the blame lies not with the computer, but with the corporate giant whose leviathan was designed not to meet the needs of physicians in the Emergency Department, but to support the larger hospital and primary-care enterprise. Unfortunately, as we see here, even a “custom” design doesn’t solve all the issues.

These authors report on their experience with their own homegrown system, eDoc, designed to replace their paper charting and built using feedback from health technology experts and their own emergency medicine clinicians. Their hypothesis was that throughput would be maintained, with ED length-of-stay serving as a proxy for operational efficiency. The interrupted time series analyses performed before and after the transition are rather messy, with various approaches and adjustments, including “coarsened exact matching”, but the outcome is consistent across all their models: the computer made things worse. The estimated difference per patient is small – about 6 additional minutes – but, as the authors note, in a mid-size ED handling about 165 patients a day, this adds over 16 hours of additional boarding time, or the effect of shrinking your ED by up to 2/3rds of a room.
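The back-of-the-envelope math behind that boarding figure is straightforward, using the per-patient delay and daily census quoted in the post:

```python
# ~6 extra minutes per patient, at ~165 patients per day.
extra_minutes_per_patient = 6
patients_per_day = 165

# Total additional daily boarding time, in hours.
extra_hours_per_day = extra_minutes_per_patient * patients_per_day / 60
print(f"Additional daily boarding time: {extra_hours_per_day:.1f} hours")  # 16.5
```

Which is how six trivial-sounding minutes per patient quietly becomes most of a bed lost around the clock.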

It is probably erroneous to simply blame “computers” as the culprit for our woes. Rather, it is the computer-as-vehicle for other onerous documentation requirements and regulatory flaming hoops. If the core function of the EHR were solely to meet the information and workflow needs of physicians, rather than the entire Christmas buffet of modern administrative and billing workflow, it is reasonable to expect a moderation in the level of suffering.

But, I think that ship has sailed.

“A Custom-Developed Emergency Department Provider Electronic Documentation System Reduces Operational Efficiency”

https://www.ncbi.nlm.nih.gov/pubmed/28712608