No Pictures of Poop Needed

I like this article – not because of any specific quality-improvement merit of their intervention, but because it reminded me of a test I perform far too often.

It’s an easy trap to fall into – the “well, let’s just see how much poop is in there” film, ordered for diagnostic reassurance and to help persuade the family you’re doing relevant testing in the Emergency Department. However, here are the relevant passages from their introduction:

In a 2014 clinical guideline, the North American and European Societies of Pediatric Gastroenterology, Hepatology, and Nutrition found that the evidence supports not performing an AXR to diagnose functional constipation.

and

Recent studies showed that AXRs performed in the ED for constipation resulted in increased return visits to the ED for the same problem.

I feel some solace in knowing that 50 to 70% of ED visits for constipation may include an abdominal radiograph as part of the workup – meaning I’m, at least, just part of the herd.

So, regardless of the point of their article – that a plan-do-study-act cycle of education and provider feedback successfully cut their rate of radiography from 60% to 20% – this is yet another misleading and/or unnecessary test to delete from our practice routine.

“Reducing Unnecessary Imaging for Patients With Constipation in the Pediatric Emergency Department.”
https://www.ncbi.nlm.nih.gov/pubmed/28615355

The FAST Is Wrong, Bob

What happens when you routinely do an unnecessary test that rarely changes management? Essentially, nothing.

So, here is a randomized, controlled trial demonstrating precisely that.

This trial looks at the Focused Assessment with Sonography in Trauma exam, as performed in pediatric blunt trauma patients. The FAST, if you recall, is indicated primarily for hypotensive blunt trauma patients – that is, it has supplanted diagnostic peritoneal lavage as a non-invasive alternative. It does not routinely provide a diagnosis, but it helps guide initial management and may triage a patient to emergency laparotomy rather than resuscitation and further testing. Therefore, in a stable pediatric trauma patient, the pretest likelihood of a significant finding – free fluid relating to hemorrhage from trauma – is quite low. Furthermore, because many significant intra-abdominal injuries to solid and hollow organs are missed by ultrasound, a negative FAST has a poor negative likelihood ratio and should not substantially affect decisions for advanced imaging as otherwise clinically indicated.

So, then, this trial is a bit of an odd duck with respect to any expected difference – and that’s precisely what they found in their “coprimary outcomes”. Among the 925 patients randomized to trauma team assessment alone or trauma team assessment supplemented by Emergency Physician FAST, there was no significant difference in imaging, Emergency Department length of stay, missed intra-abdominal injuries, or total hospital charges. The authors hypothesized, based on adult data, there might be savings at least in ED LOS – though I would rather suggest that adding one more non-diagnostic test to the acute evaluation is more likely to mildly prolong it.

There are also issues generalizing this study setting, where ~53% of patients in each cohort received CTs, to other institutions. Interestingly, mean time to CT was over 2 1/2 hours, suggesting a great deal of observation and reassessment – rather than the initial evaluation – drove imaging decisions. Then, after expert review, EPs were found to have incorrectly identified a positive FAST in 10 out of 23 cases – and missed 11 true positives, as well. The FAST, even at this academic medical center where it is performed routinely, cannot be relied upon.

The sum of this evidence is: no change in practice. A stable patient is, by definition, stable for imaging as indicated – and the FAST is an unnecessary part of the initial clinical evaluation.

“Effect of Abdominal Ultrasound on Clinical Care, Outcomes, and Resource Use Among Children With Blunt Torso Trauma”

http://jamanetwork.com/journals/jama/article-abstract/2631528

Icatibant … Can’t?

In a small, problematic, Phase 2 trial, icatibant – a selective bradykinin B2 receptor antagonist – seemed promisingly efficacious for the treatment of angiotensin-converting enzyme inhibitor-induced angioedema. Considering the catastrophic and potentially fatal complications relating to airway angioedema, the prospect of having an effective rescue medication is of substantial clinical importance.

Sadly, and first picked up by Bryan Hayes, the phase 3 trial was a wash. Published with great fanfare in the Journal of Allergy and Clinical Immunology: In Practice, this multi-center study enrolled 121 patients with presumed, and at least moderately severe, ACE-I-induced angioedema. The primary efficacy endpoint was the subjective “time to meeting discharge criteria”, which was guided by a scoring system consisting of difficulty breathing, difficulty swallowing, voice change, and tongue swelling. Secondary endpoints included time to onset of symptom relief, rescue therapy, and other safety considerations.

Almost all patients received some “conventional” therapy prior to randomization, with most (>80%) receiving antihistamines or corticosteroids and approximately one-fifth receiving epinephrine. The median time to conventional therapy was ~3.5 hours, and enrolled patients received either icatibant or placebo ~3.3 hours afterwards.

The picture is worth all the words:

No difference.

Laudably – although this ought to be the default, without special recognition – the sponsor and these COI-afflicted authors unabashedly published these neutral findings with little sugarcoating. I will defer, then, to their closing sentence:

In conclusion, icatibant was no more effective than placebo in treating at least moderately severe ACE-I-induced angioedema in this phase III trial.

“Randomized Trial of Icatibant for Angiotensin-Converting Enzyme Inhibitor Induced Upper Airway Angioedema”
http://www.sciencedirect.com/science/article/pii/S2213219817301721

What Does a Sepsis Alert Gain You?

The Electronic Health Record is no longer simply that – a recording of events and clinical documentation. Decision-support has, for good or ill, morphed it into a digital nanny vehicle for all manner of burdensome nagging. Many systems have implemented a “sepsis alert”, typically based on vital signs collected at initial assessment. The very reasonable goal is early detection of sepsis and early initiation of appropriately directed therapy. The downside, unfortunately, is that such alerts are rarely true positives for severe sepsis in the broadest sense – alerts far outnumber the instances in which a change in clinical practice results in a change in outcome.

So, what to make of this:

This study describes the before-and-after performance of a quality improvement intervention to reduce missed diagnoses of sepsis, part of which was the introduction of a triage-based EHR alert. These alerts fired during initial assessment based on abnormal vital signs and the presence of high-risk features. The article describes baseline characteristics for a pre-intervention phase of 86,037 Emergency Department visits, and then a post-intervention phase of 96,472 visits. During the post-intervention phase, there were 1,112 electronic sepsis alerts, 265 of which resulted in initiation of the sepsis protocol after attending physician consultation. The authors, generally, report fewer missed or delayed diagnoses during the post-intervention period.

But, the evidence underpinning conclusions from these data – as they relate to improvements in clinical care or outcomes, or even the magnitude of the process improvement being publicized – is fraught. The alert here is reported as having a sensitivity of 86.2%, and routine clinical practice picked up nearly all of the remaining cases that were alert-negative. The combined sensitivity is reported to be 99.4%. Then, the specificity appears to be excellent, at 99.1% – but, for such an infrequent diagnosis, even using their most generous classification for true positives, the false alerts outnumbered the true alerts nearly 3 to 1.
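To put that arithmetic in concrete terms, here is a minimal sketch of the predictive-value calculation. The sensitivity, specificity, visit count, and true-alert count are the figures reported above; the prevalence is back-calculated from them and is my approximation, not a number taken from the paper.

```python
# Minimal Bayes arithmetic for a low-prevalence alert.
# Sensitivity, specificity, visits, and true-alert count are as reported
# above; the prevalence is back-calculated and only approximate.

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from sensitivity, specificity, prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.862, 0.991
visits = 96_472                      # post-intervention ED visits
implied_cases = 265 / sens           # ~307 cases implied by 265 true alerts
prev = implied_cases / visits        # ~0.3% prevalence

print(f"approximate prevalence: {prev:.2%}")                # ~0.32%
print(f"PPV at 86.2%/99.1%: {ppv(sens, spec, prev):.0%}")   # ~23%, i.e. a 'hit rate' of ~1 in 4
```

Even with 99%+ specificity, a prevalence around 0.3% guarantees roughly three false alerts for every true one – the same ratio described above.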

And, that classification scheme is the crux of determining the value of this approach. The primary outcome was defined as either treatment on the ED sepsis protocol or pediatric ICU care for sepsis. Clearly, part of the primary outcome is directly contaminated by the intervention – an alert encouraging use of a protocol will increase initiation, regardless of appropriateness. This will not impact sensitivity, but will effectively increase specificity and directly inflate PPV.

Importantly, this led the authors to include a sensitivity analysis of their primary outcome, examining how overall performance changes if stricter definitions of a true positive are entertained. These analyses evaluate the predictive value of the alert if true positives are restricted to those eventually requiring vasoactive agents or pediatric ICU care – and, unsurprisingly, even this small decline in specificity results in a dramatic drop in PPV – down to 2.4% for the alert alone.

This number better matches the face validity we’re most familiar with for these simplistic alerts – the vast majority triggered have no chance of impacting clinical care and improving outcomes. It should further be recognized the effect size of early recognition and intervention for sepsis is real, but quite small – and becomes even smaller as the definition broadens to cases of lower severity. With nearly 100,000 ED visits in both the pre-intervention and post-intervention periods, there was no detectable effect on ICU admission or mortality. Finally, the authors focus on their “hit rate” of 1:4 in their discussion – but, I think it is more likely the number of alerts fired for each case of reduced morbidity or mortality is on the order of hundreds, or possibly thousands.

Ultimately, the reported and publicized magnitude of the improvement in clinical practice likely represents more smoke and mirrors than objective improvements in patient outcomes, and in the zero-sum game of ED time and resources, these sorts of alerts and protocols may represent important subtractions from the care of other patients.

“Improving Recognition of Pediatric Severe Sepsis in the Emergency Department: Contributions of a Vital Sign–Based Electronic Alert and Bedside Clinician Identification”

http://www.annemergmed.com/article/S0196-0644(17)30315-3/abstract

PCCs for Non-Warfarin ICH?

This quick post comes to you from the EMedHome weekly clinical pearl, which was forwarded along to me with a “Good stuff?” open-ended question.

The “good stuff” referred to a series of articles discussing the “CTA spot sign”, a radiologic marker of ongoing extravasation of blood following intracranial hemorrhage. As logically follows, such ongoing bleeding into a closed space has been associated with relatively increased hematoma growth and poorer clinical outcomes.

However, the post also highlighted – more in an informational sense – an article promoting the potential use of prothrombin complex concentrates for treatment of bleeding, regardless of anticoagulation status. We are all obviously familiar with their use in warfarin-related and factor Xa-associated ICH, but this article endeavors to promote a hypothesis for PCC use in the presence of any ICH with ongoing radiologically apparent bleeding.

The evidence produced to support their hypothesis? A retrospective cohort of 8 patients with ICH and a CTA spot sign, half of whom received PCCs and half of whom did not. Given the obvious limitations of this level of evidence, along with problems of face validity, there is no reason to revisit their results. The EMedHome pearl seemed to suggest we ought to be aware of this therapy in case a specialist consultant requested it. Now, you are aware – it is expensive, unproven, and not indicated without a substantially greater level of evidence to support its use.

“Role of prothrombin complex concentrate (PCC) in Acute Intracerebral Hemorrhage with Positive CTA spot sign: An institutional experience at a regional and state designated stroke center”
https://www.ncbi.nlm.nih.gov/pubmed/27915393

Angiotensin II for Refractory Shock

If you blockade the angiotensin receptor system, you have a treatment for hypertension. If you agonize that same system, it logically follows you may have a corresponding treatment for hypotension. So, this is ATHOS-3, a phase 3 trial of synthetic human angiotensin II infusion in patients with catecholamine-resistant shock.

Roughly speaking, this is a trial evaluating the effectiveness of angiotensin II for improving hemodynamic parameters in adult patients in vasodilatory shock – defined by the trialists on the basis of a sufficient cardiac index, intravascular volume measurements, and persistent hypotension. Enrolled patients also needed to display ongoing hemodynamic derangement despite “high-dose vasopressors”. Exclusion criteria abound. The primary outcome was achievement of mean arterial pressure targets at 3 hours after initiation of the angiotensin II or placebo infusion.

Over the ~1.5 year study period, 404 patients were screened, with 321 ultimately initiating the study protocol. There’s little ambiguity with respect to the primary outcome – 69.9% of patients met MAP targets in the angiotensin cohort compared with 23.4% with placebo. Improvement in MAP led to corresponding downtitration of catecholamine vasopressors in the intervention cohort. The intervention cohort displayed improvements in the cardiovascular SOFA score, but no difference in overall SOFA at 48 hours. Mortality was quite high regardless of group assignment, and no reliable difference was noted. Adverse events were common in each group with, again, no reliable differences detected.

This trial is mostly just interesting from a scientific awareness standpoint. The beneficial or harmful effects of angiotensin II infusion are not established by these data. The enrolled population – approximately one patient every four months per site, on average – cannot be reliably generalized. As with any sponsored trial replete with conflicts of interest among the authors – and particularly one with slow enrollment due to extensive exclusions – skepticism is warranted. That said, this novel vasopressor clearly warrants additional study and comparative effectiveness evaluation.

“Angiotensin II for the Treatment of Vasodilatory Shock”
http://www.nejm.org/doi/full/10.1056/NEJMoa1704154

Is The Road to Hell Paved With D-Dimers?

Ah, D-dimers, the exposed crosslink fragments resulting from the cleaving of fibrin mesh by plasmin. They predict everything – and nothing – with poor positive likelihood ratios for scads of pathologic diagnoses, and limited negative likelihood ratios for others. Little wonder, then, that routine D-dimer assays were part of the PESIT trial taking the diagnosis of syncope off the rails. Now, does the YEARS study threaten to make a similar kludge out of the diagnosis of pulmonary embolism?

On the surface, this looks like a promising study. We are certainly inefficient at the diagnosis of PE. Yield for CTPA in the U.S. is typically below 10%, and some of these diagnoses are likely insubstantial enough to be false positives. This study implements a standardized protocol for the evaluation of possible PE, termed the YEARS algorithm. All patients with possible PE are tested using D-dimer. Patients are also risk-stratified for pretest likelihood of PE by three elements: clinical signs of deep vein thrombosis, hemoptysis, or “pulmonary embolism the most likely diagnosis”. Patients with none of those high-risk elements use a D-dimer cut-off of 1000 ng/mL to determine whether they proceed to CTPA or not. If a patient has one or more high-risk features, the traditional D-dimer cut-off of 500 ng/mL is used. Of note, this study was initiated prior to age-adjusted D-dimer becoming commonplace.
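As an illustration only – not the trialists’ implementation – here is a minimal sketch of the decision logic as just described, with hypothetical function and variable names:

```python
def years_ctpa_indicated(ddimer_ng_ml, signs_of_dvt, hemoptysis, pe_most_likely):
    """Illustrative YEARS-style triage: returns True if CTPA is indicated.

    With zero YEARS items, the 1000 ng/mL D-dimer cut-off applies;
    with one or more items, the traditional 500 ng/mL cut-off applies.
    """
    items = sum([signs_of_dvt, hemoptysis, pe_most_likely])
    threshold = 1000 if items == 0 else 500
    return ddimer_ng_ml >= threshold

# No YEARS items and a D-dimer of 800 ng/mL -> managed without CTPA.
print(years_ctpa_indicated(800, False, False, False))   # False
# "PE most likely" and the same D-dimer of 800 ng/mL -> proceed to CTPA.
print(years_ctpa_indicated(800, False, False, True))    # True
```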

Without going into interminable detail regarding their results, their strategy works. Patients ruled out solely by the D-dimer component of this algorithm had similar 3-month event rates to those ruled out following a negative CTPA. Their strategy, per their discussion, increases the proportion managed without CTPA by 14% over a Wells-based strategy (CTPA in 52% per-protocol, compared with 66% based on Wells) – although less so when compared against Wells plus age-adjusted D-dimer. Final yield for PE per-protocol with YEARS was 29%, which is at the top end of the range for European cohorts and far superior, of course, to most U.S. practice.

There are a few traps here. Interestingly, physicians were not blinded to the D-dimer result when they assigned the YEARS risk-stratification items. Considering the subjectivity of the “most likely” component, foreknowledge of this result and its consequences for subsequent testing could easily influence the clinician’s risk assessment. The “most likely” component also has a great deal of inter-physician and general cultural variation that may affect the performance of this rule. The prevalence of PE among all patients considered for the diagnosis was 14% – a little lower than the average of most European populations considered for PE, but easily twice as high as those considered for possible PE in the U.S. It would be quite difficult to generalize any precise effect size from this study to such disparate settings. Finally, considering the continuous likelihood ratios for D-dimer assays, we know the +LR for a test result of 1000 ± ~500 is probably around 1. This suggests using a cut-off of 1000 may hinge a fair bit of management on a test result carrying essentially zero informational value.
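To make the likelihood-ratio point explicit, here is a quick worked example. The interval-specific +LR of ~1 is the approximation cited above, not a figure from this study, and the 14% pre-test probability is simply the cohort prevalence:

```python
# Post-test probability from a pre-test probability and a likelihood ratio.
def post_test_probability(pre_test_p, lr):
    pre_odds = pre_test_p / (1 - pre_test_p)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# With an interval likelihood ratio of ~1 for results near 1000 ng/mL,
# the post-test probability is unchanged from the pre-test probability.
print(post_test_probability(0.14, 1.0))   # 0.14 -- the result adds no information
```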

This ultimately seems as though the algorithm might have grown out of a need to solve a problem of their own creation – too many potentially actionable D-dimer results being produced by an indiscriminate triage-ordering practice. I remain a little wary of the effect of poisoning clinical judgment with the D-dimer result, and expect it confounds the overall generalizability of this study. As robust as this trial was, I would still recommend waiting for additional prospective validation prior to adoption.

“Simplified diagnostic management of suspected pulmonary embolism (the YEARS study): a prospective, multicentre, cohort study”
http://thelancet.com/journals/lancet/article/PIIS0140-6736(17)30885-1/fulltext

Double Coverage, Cellulitis Edition

The Infectious Diseases Society of America guidelines are fairly reasonable when it comes to cellulitis. Non-suppurative cellulitis – that is to say, without associated abscess or purulent drainage – is much less likely to be methicillin-resistant S. aureus. The guidelines, therefore, recommend monotherapy with a β-lactam, typically cephalexin. Conversely, with a suppurative focus, trimethoprim-sulfamethoxazole monotherapy is an appropriate option. However, it’s reasonable to estimate current practice involves prescribing both agents in somewhere between a fifth and a quarter of cases – presumably both wasteful and potentially harmful. This trial, therefore, examines this practice by randomizing patients to either double coverage or cephalexin plus placebo.

The short answer: no difference. The rate of clinical cure was a little over 80% in both cohorts in the per-protocol population. Of those with follow-up and treatment failure, over half progressed to abscess or purulent drainage on re-evaluation – and about two-thirds cultured out as S. aureus. There was no reliable evidence, however, that co-administration of TMP-SMX prevented this progression.

The really fun part of this article, however, ties into the second line of their abstract conclusion:

“However, because imprecision around the findings in the modified intention-to-treat analysis included a clinically important difference favoring cephalexin plus trimethoprim-sulfamethoxazole, further research may be needed.”

This hedging stems from the fact that 17.8% of the enrolled cohort were excluded from the per-protocol analysis – and, depending on the definition used for the modified intention-to-treat analysis, there was actually up to a 7.3% difference in clinical cure favoring double coverage (76.2% vs 69.0%). This resulted from almost twice as many patients in the cephalexin monotherapy cohort taking <75% of their antimicrobial course, missing follow-up visits, or otherwise deviating from protocol.

The best Bayesian interpretation of this finding is probably – and this is where frequentism falls apart – simply to ignore it. The pre-study odds of dramatic superiority of double coverage are low enough, and the outcome definition for the modified intention-to-treat cohort in question is broad enough, that this finding should not influence the knowledge translation of this evidence. Stick with the IDSA soft-tissue guidelines – and one antibiotic at a time, please. It is important to recognize – and to educate patients – that about 1 in 6 may fail initial therapy, and that these failures do not necessarily reflect inappropriately narrow antibiotic coverage or therapeutic mismanagement.

“Effect of Cephalexin Plus Trimethoprim-Sulfamethoxazole vs Cephalexin Alone on Clinical Cure of Uncomplicated Cellulitis”
http://jamanetwork.com/journals/jama/article-abstract/2627970

Blood Cultures Save Lives and Other Pearls of Wisdom

It’s been sixteen years since the introduction of Early Goal-Directed Therapy in the Emergency Department. For the past decade and a half, our lives have been turned upside-down by quality measures tied to the elements of this bundle. Remember when every patient with sepsis was mandated to receive a central line? How great were the costs – in money, in time, and in actual harms – from these well-intentioned yet erroneous directives based on a single trial?

Regardless, thanks to the various follow-up trials testing strict protocolization against the spectrum of timely recognition and aggressive intervention, we’ve come a long way. However, there are still mandates incorporating the vestiges of such elements of care – such as those introduced by the New York State Department of Health. Patients diagnosed with severe sepsis or septic shock are required to complete protocols consisting of 3-hour and 6-hour bundles including blood cultures, antibiotics, and intravenous fluids, among other elements.

This article, from the New England Journal of Medicine, retrospectively examines the mortality associated with completion of these various bundle elements. Among patients whose 3-hour bundle was initiated within 6 hours of arrival to the Emergency Department, the authors stratified mortality by time-to-completion of each element.

Winners: obtaining blood cultures, administering antibiotics, and measuring serum lactate
Losers: time to completion of a bolus of intravenous fluids

Of course, since blood cultures are obtained prior to antibiotic administration, these outcomes are co-linear – and the cultures don’t actually save lives, as facetiously suggested in the post heading. But antibiotic administration was associated with a fraction of a percent increase in mortality per hour of delay over the first 12 hours after initiation of the bundle. Intravenous fluid administration, however, showed no apparent association with mortality.

These data are fraught with issues, of course, relating to their retrospective nature and the limitations of the underlying data collection. Their adjusted model accounts for a handful of features, but there are still potential confounders influencing the mortality of those whose bundle was completed within 3 hours as compared with those whose was not. The differences in mortality – a hard and important endpoint – are quite small. Earlier is probably better, but the individual magnitude of benefit will be unevenly distributed around the average, and while a delay of several hours might matter, minutes probably do not. The authors are appropriately reserved in their conclusions, however, stating only that these observational data support associations between mortality and antibiotic administration, and do not extend to any causal inferences.

The lack of an association between intravenous fluids and mortality, however, raises significant questions requiring further prospective investigation. Could it be, after these years wandering in the wilderness with such aggressive protocols, the only universally key feature is the initiation of appropriate antibiotics? Do our intravenous fluids, given without regard to individual patient factors, simply harm as many as they help, resulting in no net benefit?

These questions will need to be addressed in randomized controlled trials before the next level of evolution in our approach to sepsis, but the equipoise for such trials may now exist – to complete our journey from Early Goal-Directed to Source Control and Patient-Centered.  The difficulty will be, again, in pushing back against well-meaning but ill-conceived quality measures whose net effect on Emergency Department resource utilization may be harm, with only small benefits to a subset of critically ill patients with sepsis.

“Time to Treatment and Mortality during Mandated Emergency Care for Sepsis”

http://www.nejm.org/doi/full/10.1056/NEJMoa1703058

You’ve Got (Troponin) Mail

It’s tragic, of course, that no one in this generation will understand the epiphany of logging on to America Online and being greeted by its almost-synonymous “You’ve got mail!” But we, and future generations, may bear witness to the advent of something almost as profoundly uplifting: text-message troponin results.

These authors conceived and describe a fairly simple intervention in which test results – in this case, troponin – were pushed to clinicians’ phones as text messages. In a pilot and cluster-randomized trial with 1,105 patients in the final analysis, these authors found the median interval from troponin result to disposition decision was 94 minutes in the control group, as compared with 68 minutes in the intervention cohort. However, a smaller difference in median overall length of stay did not reach statistical significance.

Now, I like this idea – even though this is clearly not the study showing generalizable, definitive benefit. For many patient encounters, there is some readily identifiable bottleneck result of greatest importance for disposition. If a reasonable, curated list of these results is pushed to a mobile device, there is an obvious time savings compared with manually pulling these results from the electronic health record.

In this study, however, the median LOS for these patients was over five hours – and the median LOS for all patients receiving at least one troponin was nearly 7.5 hours. The relative effect size, then, is really quite small. Next, there are always concerns relating to interruptions and unintended consequences for cognitive burden. Finally, it logically follows that if this text message derives some of its beneficial effect by altering task priorities, then some other process in the Emergency Department is having its completion time increased.

I expect, if implemented in a typically efficient ED, the net result of any improvement might only be a few minutes saved across all encounter types – but multiplied across thousands of patient visits for chest pain, it’s still worth considering.

“Push-Alert Notification of Troponin Results to Physician Smartphones Reduces the Time to Discharge Emergency Department Patients: A Randomized Controlled Trial”
http://www.annemergmed.com/article/S0196-0644(17)30317-7/abstract