Open the Data

The Institute of Medicine has convened a committee charged with examining the issues surrounding data sharing after randomized controlled trials.

The need for data sharing is beyond question.  From companies behaving badly – such as Merck with Vioxx, or Roche with Tamiflu – to inadvertent errors in analysis, protecting the health of patients requires more than simple peer review of documents prepared for pharmaceutical corporations by medical communication professionals.

Jeff Drazen, in this editorial, makes a call for feedback to the IOM.  Oddly, his main concern is – how long ought the original authors of a study be allowed exclusive access to trial data?  Would open data disincentivize researchers from performing clinical investigations, knowing their academic and commercial benefit would likely be curtailed?  On the flip side, we have seen publication of trial data massively delayed – see ATLANTIS Part A, withheld for seven years – by pharmaceutical companies concerned with protecting their business interests.

It is a complicated and subtle issue, to be sure, but appropriate transparency is almost certainly an improvement over the current situation.  Full details, and how to leave feedback, are at:

“Open Data” (open access)

Intermediate Lactate Values, Lowering the Bar for Cryptic Shock

A guest post by Rory Spiegel (@CaptainBasilEM) who blogs on nihilism and the art of doing nothing at

Serum lactate has been the darling of Emergency Medicine/Critical Care since Manny Rivers first introduced EGDT to the Emergency Department. Since then we have used it as a screening tool, a means to guide therapy, and even to prognosticate outcomes. Despite our universal acceptance of its utility, very little high-quality evidence has been published on its diagnostic properties. I reviewed this evidence in more depth in a past post and will limit this post to the question, “Can serum lactate identify a group of patients who are in cryptic shock, despite clinically appearing well?” The Surviving Sepsis Campaign recommends using a lactate level of 4 mmol/L as the threshold for identifying cryptic shock, but lactate has a continuous curvilinear association with mortality, and a 4 mmol/L threshold seems like an arbitrary cutoff.

In an attempt to answer this question, Puskarich et al conducted a systematic review, published in the Journal of Critical Care, examining the ability of intermediate lactate values (2.0-3.9 mmol/L) to predict cryptic shock and death. Eight studies were included in this review, encompassing a total of 11,062 patients with intermediate lactate levels. The authors reasonably concluded that, given the heterogeneity of these datasets, a formal meta-analysis was not appropriate; instead they settled for descriptive statistics of each individual trial. In summary, they found patients with intermediate lactate values who were normotensive had a 30-day mortality rate of 14.9% (mortality in individual trials ranged from 3.2-16.4%). Obviously, the patients with intermediate lactate levels who were concurrently hypotensive fared far worse (30-day mortalities of 35-37%).

This review fails to define the clinical utility of the association between elevated lactate levels and risk of death.  In the few included studies which published diagnostic test characteristics, lactate performed surprisingly poorly. Howell et al found lactate had an AUC of 0.71 for predicting 30-day mortality. Shapiro et al reported a similar AUC of 0.67. In fact, in the Shapiro study, when a cutoff of 2.5 mmol/L was used as a screening tool for cryptic shock, it had a sensitivity of 59% and a specificity of 71%. Even a threshold of 4 mmol/L, though very specific (92%), had a sensitivity of 36% – far lower than would traditionally be accepted for a screening test.

More importantly, these data do not allow us to determine how a lactate threshold of 2.5 mmol/L performs in the true cryptic shock patient – the patient who has end-organ hypoperfusion without any clinically obvious signs. Most patients with elevated lactates appear clinically ill, and thus the lactate only confirms what we already know: this patient needs aggressive intervention. If lactate is to prove useful as a true screening tool (at whatever threshold), it should be able to identify the patient clandestinely experiencing septic shock before any obvious signs of end-organ damage (AMS, hypotension, AKI) become apparent. Unfortunately, we have little data supporting its use in this manner. Even the secondary analysis of the Jones trial, finding similar mortalities between hypotensive patients and normotensive patients with elevated lactate (above 4 mmol/L), fails to impress. Although the cryptic shock patients were not hypotensive in the strictest sense, they were by no means physiologically normal. On the contrary, they were older, more tachycardic, had faster respiratory rates, and experienced significantly more intra-abdominal infections (30% vs 16%) than their hypotensive counterparts. And though they were not hypotensive (<90 mm Hg), their blood pressures were not necessarily normal: the median systolic pressure in the cryptic shock group was 108 mm Hg with an IQR of 92-126. To put it simply, these patients were sick. They did not require a lactate level to identify them as in need of aggressive therapy. There was nothing cryptic about them….

“Prognosis of Emergency Department Patients with Suspected Infection and Intermediate Lactate Levels: A Systematic Review”

Zero-Miss or High-Yield for Appendicitis?

In a persistently befuddling contradiction, the same specialty that sometimes needs to be physically restrained from pan-CT-ing every trauma patient is simultaneously concerned about the negative appendectomy rate.  Maximum sensitivity in one instance, maximum specificity in the other.

An avenue that has been bludgeoned to death many a time is the utility of the WBC for the diagnosis of appendicitis.  This study in Pediatrics, out of U.C. Davis, again attempts to establish test thresholds for WBC with the ultimate goal of reducing the negative appendectomy rate.  At U.C. Davis, similar to national rates, 2.6% of children taken for appendectomy were demonstrated to have a normal appendix.  The authors observe that neutrophil counts and overall WBC counts were within the normal range in over 80% of these patients, and describe a potential management strategy to improve their negative appendectomy rate.  For WBC <9,000 or <8,000, the negative appendectomy rate could be improved to 0.6% or 1.2% – as long as the surgeons were content with a sensitivity of 92% to 95%.

Thus the conundrum.  How many cases of appendicitis are you willing to allow to progress to perforation – associated with not-insignificant morbidity – in order to minimize the negative appendectomy rate?  Considering up to 20% of appendicitis cases will have a normal WBC count, the solution presented by these leukocytosis cut-offs – despite addressing an important problem – does not appear to provide the ultimate answer.
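To put numbers on that tradeoff – a back-of-the-envelope sketch, applying the sensitivities quoted above to an illustrative 1,000 children with true appendicitis (the cohort size is an assumption for arithmetic only, not a figure from the study):

```python
def missed_appendicitis(n_cases, sensitivity):
    """True appendicitis cases falling below the WBC cutoff,
    i.e., at risk of delayed diagnosis and perforation."""
    return round(n_cases * (1 - sensitivity))

print(missed_appendicitis(1000, 0.95))  # 50 delayed/missed at 95% sensitivity
print(missed_appendicitis(1000, 0.92))  # 80 delayed/missed at 92% sensitivity
```

Against a baseline negative appendectomy rate of only 2.6%, trading dozens of delayed diagnoses for a handful of avoided negative appendectomies is a hard sell.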

“Use of White Blood Cell Count and Negative Appendectomy Rate”

Can We Escape Antibiotics in Sore Throat?

Yes.  And no.

It is well-established that complications of acute sore throat are incredibly rare.  The likelihood of a patient developing the most concerning of suppurative complications – a peritonsillar abscess or “quinsy” – is a fraction of a percent.  Rheumatic fever has been virtually eliminated in the United States.  Yet, as we see from this British cohort, over half of patients visiting primary care received a prescription for antibiotics.

This study reports on a combination of several prospectively gathered cohorts presenting with acute sore throat to British primary care practices.  Of the 14,610 adults included, only 5,243 escaped the physician’s office without an antibiotic prescription, while the remainder received immediate or delayed antibiotics.  Suppurative complications across all cohorts – peritonsillar abscess, sinusitis, otitis media, and cellulitis – ranged from 0.1% to 0.6%.

Unfortunately, this is not a randomized trial – the patients given antibiotics by their physicians had much more severe initial clinical presentations.  This means there is no information in this data set describing the actual protective effect of antibiotics, absent statistical contortions.  The main value, rather, is in describing the futility of clinical judgement for selecting patients for antibiotics.  Of all the clinical features recorded prospectively for each patient, only severe ear pain and severely inflamed tonsils were significant predictors of suppurative complications – with ORs of 3.02 and 1.92, respectively.  Even so, these features flagged hundreds of patients who never progressed.  High scores on the Centor and FeverPAIN criteria were similarly only minimally predictive.
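For readers who want the mechanics behind those odds ratios, here is a minimal sketch with entirely hypothetical counts – chosen only to land near the reported OR of ~3; the actual 2x2 tables are in the paper:

```python
def odds_ratio(a, b, c, d):
    """a: exposed with complication, b: exposed without,
    c: unexposed with complication, d: unexposed without."""
    return (a / b) / (c / d)

# Hypothetical: 1,000 patients with severe ear pain, 6 complications;
# 10,000 patients without, 20 complications.
or_ear_pain = odds_ratio(6, 994, 20, 9980)
print(f"OR {or_ear_pain:.2f}")  # OR 3.01
```

Note what the hypothetical makes plain: even with an OR of 3, 994 of the 1,000 “exposed” patients never progress – exactly the point about futile clinical targeting.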

In the end, it is apparent antibiotics confer some protective effect.  The absolute benefit, however, represents just a handful of patients out of thousands.  The authors sum it up just as nicely as I might:

“Since a policy of liberal antibiotic prescription for sore throat to prevent complications is highly unlikely to be cost effective, and clinicians cannot rely on clinical targeting to predict most complications, clinicians will need to rely on strategies such as safety netting or delayed prescription in managing the low risk of suppurative complications.”

“Predictors of suppurative complications for acute sore throat in primary care: prospective clinical cohort study” (open access)

ACEP Clinical Policy on tPA Up For Comment

At ACEP13, the Council voted to reconsider the clinical policy statement regarding tPA for acute ischemic stroke.  As part of that resolution, the policy was to be opened up for a sixty-day public comment period.
And, the moment you’ve all been waiting for – it’s up!  Follow this link to read more, download the clinical policy statement, and leave your comments:

Ignore This Fixed-Dose Philosophy For Morphine

Emergency physicians are legendary for poor control of pain in the Emergency Department, with many factors challenging optimal care.  One solution – and one I’ve taught in the ED – is to use a weight-based approach to dosing.  This works out to 0.1-0.15 mg/kg of morphine or equivalent as a starting dose.
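A quick sketch of what that teaching implies across the weight ranges at issue – the example weights are illustrative, not patients from the study:

```python
def morphine_dose_mg(weight_kg, mg_per_kg=0.1):
    """Weight-based starting dose, at the low end of 0.1-0.15 mg/kg."""
    return weight_kg * mg_per_kg

FIXED_DOSE_MG = 4.0
for weight in (70, 100, 150):  # roughly non-obese, obese, morbidly obese
    dose = morphine_dose_mg(weight)
    print(f"{weight} kg: weight-based {dose:.0f} mg vs fixed {FIXED_DOSE_MG:.0f} mg")
```

Even at the conservative 0.1 mg/kg end, a fixed 4 mg dose undershoots the weight-based recommendation for nearly every adult – which frames the study below.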

These authors gathered 300 patients – 100 non-obese, 100 obese, and 100 morbidly obese – all of whom received a 4 mg intravenous dose of morphine for median initial pain levels of ~8 on a 10-point scale.  Upon reassessment, a median of ~1 hour after administration, the median pain level in all groups had fallen to 2 or 3.  This tracks with other work, which observed substantial numbers of patients with adequate responses to morphine doses of even less than 4 mg.  The authors therefore conclude:

“BMI does not predict the analgesic response to a single dose of intravenous morphine in the ED. This is true even for patients who are morbidly obese. We suggest using fixed doses rather than weight-based doses of morphine for acute pain in obese patients.”

However, this retrospective study fails to capture and control for many of the other factors associated with opiate response – including age, substance abuse history, and pre-hospital pain control – along with all the other contextual factors lost through chart abstraction.  Additionally, patients at Maricopa Medical Center in Phoenix are hardly generalizable to, well, nearly anywhere else in the world.

Ultimately, this limited study leads to an erroneous and potentially harmful conclusion that weight-based doses are unnecessary.  Aggressive pain control – titrated or weight-based – is not in any fashion refuted by this work.

“Analgesic response to morphine in obese and morbidly obese patients in the emergency department.”

FDA: The Black Knight

… specifically, the Black Knight from Monty Python, apparently reduced to nibbling impotently at the feet of pharmaceutical corporations as they sail through the approval process.

This study in JAMA reviews the characteristics of novel therapeutics approved by the FDA between 2005 and 2012.  These authors identified 188 novel agents approved for 206 indications, and describe an entire host of fascinating data regarding the trials supporting approval.  A few of the most damning pearls:

  • 37% of indications were approved on the basis of a single trial.
  • The median number of patients per trial was 760.
  • 49% of trials used only surrogate outcomes.
  • Surrogate outcome trials constituted the sole basis of approval for 91 indications.
  • Only 48% of cancer trials were randomized, and only 27% were double-blinded.
  • 40 trials were part of accelerated approval, 39 of which used surrogate outcomes, with a median number of patients of 157.

The data go on and on.

Considering many landmark trials could not be independently reproduced, even with the help of the original researchers; most published research findings are false; and half of what you know is wrong – we might as well just dump poison in our water supply.  It’s cheaper than suffering the next blockbuster drug for which pharmaceutical companies engineer an indication through distorted trial design.

“Clinical Trial Evidence Supporting FDA Approval of Novel Therapeutic Agents, 2005-2012” (open access)

“Author Insights: Quality of Evidence Supporting FDA Approval Varies”

The Great Sugar Wars of Pediatric Critical Care

A guest post by Rory Spiegel (@CaptainBasilEM) who blogs on nihilism and the art of doing nothing at

Kids are just small adults, or so says the Control of Hyperglycemia in Pediatric Intensive Care (CHiP) trial. This impressively large RCT of 1,369 pediatric ICU patients (under 16 years old) requiring at least 12 hours of vasoactive support and mechanical ventilation examined how controlling blood glucose levels affects outcomes. Subjects were randomized to either tight glucose control (72-126 mg/dL) or conventional control (less than 216 mg/dL). Patients were followed for 30 days to see if mortality and rates of ventilator dependence differed between the two groups.

Simply put, the trial was negative. Though the tight glucose control group received more insulin and had lower mean daily blood glucose levels during the first 10 days after randomization, there was no statistically significant difference in days alive and off the ventilator between the two groups. Patients in the tight glycemic control group were less likely to receive renal replacement therapy (odds ratio 0.64, 95% CI 0.45-0.89), but were conversely far more likely to suffer an episode of severe hypoglycemia (below 36 mg/dL), with an absolute difference of 4.8%.
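That 4.8% absolute difference translates directly into a number needed to harm – a one-line calculation worth doing explicitly:

```python
def number_needed_to_harm(absolute_risk_increase):
    """NNH = 1 / absolute risk increase."""
    return 1 / absolute_risk_increase

# 4.8% absolute increase in severe hypoglycemia with tight control
print(round(number_needed_to_harm(0.048)))  # 21
```

Roughly one episode of severe hypoglycemia for every 21 children treated to the tight target – with no mortality or ventilator benefit to show for it.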

Unfortunately, thanks to the authors’ spectacular display of subgroup analysis, there is nothing simple about this publication. 60% of the population was admitted to the ICU after cardiac surgery; the remaining 40% were there for other reasons, though further details were not specified. A multitude of endpoints in both the cardiac and non-cardiac subgroups were examined. As with the entire cohort, there was no difference in mortality or ventilator-free days in either subgroup.  The authors did, however, observe a decrease in length of stay and mean healthcare costs in the subgroup of patients who did not undergo cardiac surgery and were treated using the tight glycemic parameters.

Though the authors conclude these findings are at best hypothesis-generating and should not be used to guide therapy, this subgroup analysis will inevitably be misinterpreted as suggesting that pediatric ICU patients who have not undergone cardiac surgery will benefit from a strict glycemic regimen. This is clearly not the case. What this trial amounts to is a negative study – with negative primary and secondary endpoints – whose subgroup analysis uncovered statistical differences as likely to be caused by chance as by the aggressive glucose management.

This trial is a reminder of our continued insistence on applying disease-oriented outcomes of questionable long-term efficacy to an acutely ill population. The NICE-SUGAR trial established that tight glucose control was detrimental in an acutely ill adult population; the CHiP trial demonstrates these lessons can now be applied to our smaller counterparts.

“A Randomized Trial of Hyperglycemic Control in Pediatric Intensive Care”

Dantrolene: Saving Canadian Pigs in Ventricular Fibrillation

Just a quick highlight of interesting translational research, this time aimed at improving survival post-ventricular fibrillation.

These authors hypothesized that, as VF is associated with impaired intracellular calcium cycling, perhaps blockade of an intracellular pathway may reduce refractoriness to defibrillation.  The agent?  Dantrolene – which acts upon the ryanodine receptor of the sarcoplasmic reticulum.

Twenty-six Yorkshire pigs underwent induced VF, subsequently received dantrolene or normal saline, and finally underwent CPR and defibrillation.  85% of dantrolene-treated pigs were successfully defibrillated, compared with 39% of controls, and all dantrolene-treated pigs remained in an organized rhythm.  An ex vivo rabbit-heart model showed similar physiologic effects.

Perhaps dantrolene has a future as a component of ACLS protocols – only time, and further study, will tell.

“Dantrolene Improves Survival Following Ventricular Fibrillation by Mitigating Impaired Calcium Handling in Animal Models”

More Bleeding Nightmares

Do you like managing bleeding on aspirin?  How about aspirin plus clopidogrel?  What would you say to aspirin + clopidogrel + vorapaxar?

Vorapaxar, a selective antagonist for protease-activated receptor 1, is the next proposed layer of anti-thrombotic prevention for high-risk cardiovascular patients.  Just this week, back from the dead, it received a favorable review from an FDA panel tasked with examining its application for approval.

The subject of both TRACER and TRA 2°P-TIMI 50, vorapaxar may soon bless your Emergency Department with a roughly 75% relative increase (2% vs. 3.5%) in the risk of GUSTO moderate or severe bleeding.  What’s most fascinating about this drug is that, technically, both trials were negative for their primary endpoints, and TRACER was stopped early after an interim safety review.  However, a pre-specified and pre-allocated subgroup from TRA 2°P-TIMI 50 – patients with recent MI and no history of stroke – showed benefit.

Of course, as is standard for these sorts of cardiovascular trials, it showed benefit primarily in the questionable combined endpoints – and, likewise, was only safe in the narrowest slicing and dicing of favorable endpoints and bleeding outcomes.

It should be of no surprise to anyone that most authors are being substantially enriched by multiple drug companies.  I’m certain whatever foot-in-the-door Merck receives will be enough to extract the necessary healthcare dollars from the system for minimal benefit – a net NNT for mortality of ~450.
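A sketch of the harm-benefit arithmetic, using only the raw percentages quoted above (2% vs. 3.5% bleeding; NNT ~450 for mortality):

```python
bleed_control, bleed_vorapaxar = 0.020, 0.035
abs_increase = bleed_vorapaxar - bleed_control
rel_increase = abs_increase / bleed_control
nnh_bleeding = 1 / abs_increase
arr_mortality = 1 / 450  # absolute benefit implied by an NNT of ~450

print(f"relative bleeding increase: {rel_increase:.0%}")        # 75%
print(f"NNH, moderate/severe bleeding: {round(nnh_bleeding)}")  # ~67
print(f"absolute mortality benefit: {arr_mortality:.2%}")       # 0.22%
```

One extra moderate-to-severe bleed per ~67 patients treated, against one death averted per ~450 – the asymmetry speaks for itself.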

Oh, and as the great Tom Deloughery (@bloodman) writes:

“Hmmm – a competitive platelet inhibitor with a T1/2 of ~300hrs!  So, sounds like it will inhibit any platelets you give for the next 12 days … I guess Dr. Radecki will be holding direct pressure for a long time!”

“Vorapaxar for secondary prevention of thrombotic events for patients with previous myocardial infarction: a prespecified subgroup analysis of the TRA 2°P-TIMI 50 trial”