Informatics for Wrong-Patient Ordering

It seems intuitive – if, perhaps, the electronic health record has an updated problem list, and the EHR knows the typical indication of various medications, then the EHR would be able to perform some cursory checks for concordance.  If the orders and the problems are not concordant – then, as these authors propose, perhaps the orders are on the wrong patient?

This study is a retrospective analysis of the authors’ EHR, in which they had previously implemented alerts of this sort to identify problem lists that were not current.  However, after mining their 127,320 alerts over a 6-year period, they noticed 32 instances in which an order was immediately cancelled on one patient and re-ordered on another.  They then conclude that their problem list alert also has the beneficial side effect of catching wrong-patient orders.

A bit of a stretch – but, it’s an interesting application of surveillance intelligence.  The good news is, at least, that their problem list intervention is successful (pubmed) – because a yield of 0.25 wrong-patient catches per 1,000 alerts would be abysmal!
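The arithmetic behind that yield figure, as a quick sketch using the counts reported in the study:

```python
# Back-of-the-envelope yield of the problem-list alert for wrong-patient orders,
# using the counts quoted above: 32 caught wrong-patient orders out of
# 127,320 alerts fired over 6 years.
wrong_patient_catches = 32
total_alerts = 127_320

yield_per_1000 = 1000 * wrong_patient_catches / total_alerts
alerts_per_catch = total_alerts / wrong_patient_catches

print(f"{yield_per_1000:.2f} catches per 1,000 alerts")  # 0.25
print(f"{alerts_per_catch:.0f} alerts fired per catch")  # 3979
```

Put another way: clinicians had to click through nearly 4,000 alerts for each wrong-patient order caught.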

“Indication-based prescribing prevents wrong-patient medication errors in computerized provider order entry (CPOE)”
www.ncbi.nlm.nih.gov/pubmed/23396543

JAMA, Integrity, Accessibility, and Social vs. Scientific Peer Review

Yesterday, I posted regarding a JAMA Clinical Evidence series article involving procalcitonin measurement to guide antibiotic stewardship.  In that post, I raised concerns regarding other negative trials in the same spectrum and, depressingly, noted conflicts-of-interest for each of the three authors.


Graham Walker, M del Castillo-Hegyi, Javier Benitez and Chris Nickson picked up the blog post, spread it through social media and Twitter, and suggested I write a formal response to JAMA for peer-reviewed publication.  My response: I could put time into such a reply, but what would JAMA’s motivation be to publish an admission of an embarrassing failure of peer review?  And whatever response they did publish would be sequestered behind a paywall – while BRAHMS/ThermoFisher happily continued reprinting their evidence review from JAMA.  Therefore, I will write a response – but I will publish it openly here, on the Internet, and the social peer review of my physician colleagues will determine the scope of its dissemination on its merits.


Again, this JAMA article concerns procalcitonin algorithms to guide antibiotic therapy in respiratory tract infections.  It is written by Drs. Schuetz, Briel, and Mueller, each of whom receives funding from BRAHMS/ThermoFisher for work related to procalcitonin assays (www.procalcitonin.com).  The evidence they present is derived from a 2012 Cochrane Review – authored by Schuetz, Mueller, Christ-Crain, et al.  The Cochrane Review was funded in part by BRAHMS/ThermoFisher, and eight authors of the review declare financial support from BRAHMS/ThermoFisher.


The Cochrane Review includes fourteen publications examining the utility of procalcitonin-based algorithms to initiate or discontinue antibiotics.  Briefly, in alphabetical order, these articles are:

  • Bouadma 2010 – Authors declare COI with BRAHMS.  This is a generally negative study with regards to the utility of procalcitonin.  Antibiotic use was reduced, but mortality trends favored standard therapy, and the study was underpowered for this difference to reach statistical significance (24% mortality in controls, 30% mortality in procalcitonin-guided at 60 days).
  • Briel 2008 – Authors declare COI with BRAHMS.  This study is a farce.  These ambulatory patients were treated with antibiotics for such “bacterial” conditions as the “common cold”, sinusitis, pharyngitis/tonsillitis, otitis media, and bronchitis.
  • Burkhardt 2010 – Authors declare COI with BRAHMS.  Yet another ambulatory study randomizing patients with clearly non-bacterial infections.
  • Christ-Crain 2004 – Authors declare COI with BRAHMS.  Again, most patients received antibiotics unnecessarily via poor clinical judgement, for bronchitis, asthma, and “other”.
  • Christ-Crain 2006 – Authors declare COI with BRAHMS.  This is a reasonably enrolled study of community-acquired pneumonia patients.
  • Hochreiter 2009 – Authors declare COI with BRAHMS.  This is an ICU setting enrolling non-respiratory infections along with respiratory infections.  These authors pulled out the 47 patients with respiratory infections.
  • Kristoffersen 2009 – No COI declared.  Odd study.  The same percentage received antibiotics in each group, and in 42/103 cases randomized to the procalcitonin group, physicians disregarded the procalcitonin-algorithm treatment guidelines.  A small reduction in antibiotic duration was observed in the procalcitonin group.
  • Long 2009 – No COI declared.  Unable to obtain this study from Chinese-language journal.
  • Long 2011 – No COI declared.  Most patients were afebrile.  97% of the control group received antibiotics for a symptomatic new infiltrate on CXR compared with 84% of the procalcitonin group.  85% of the procalcitonin group had treatment success, compared with 89% of the control group.  Again, underpowered to detect a difference with only 81 patients in each group.
  • Nobre 2008 – Authors declare COI with BRAHMS.  This is, again, an ICU sepsis study – with 30% of the patients included having non-respiratory illness.  Only 52 patients enrolled.
  • Schroeder 2009 – Authors declare COI with BRAHMS.  Another ICU sepsis study with only 27 patients, of which these authors pulled only 8!
  • Schuetz 2009 – Authors declare COI with BRAHMS.  70% of patients had CAP, most of which was severe.  Criticisms of this study include critique of “usual care” for poor compliance with evidence supporting short-course antibiotic prescriptions, and poor external validity when applied to ambulatory care.
  • Stolz 2007 – Authors declare COI with BRAHMS.  208 patients with COPD exacerbations only.
  • Stolz 2009 – Authors declare COI with BRAHMS.  ICU study of 101 patients with ventilator-associated pneumonia.

So, we have an industry-funded collation of 14 studies – 11 of which involve relevant industry COI.  Most studies compare procalcitonin-guided judgement with standard care – and, truly, many of these studies are straw-man comparisons against sub-standard care, with antibiotics prescribed inappropriately for indications in which they have no proven efficacy.  We also have three ICU sepsis studies that discard diagnoses other than “acute respiratory infection” – resulting in absurdly low sample sizes.  As noted yesterday, larger ICU studies of 1,200 patients and 509 patients suggested harms, no substantial benefits, and poor discriminatory function of procalcitonin assays for active infection.

Whether the science eventually favors procalcitonin, improved clinical judgement, or another biological marker, it is a failure of the editors of JAMA to publish such deeply conflicted literature.  Furthermore, the traditional publishing system is configured in such a fashion that critiques are muted compared with the original article – to the point where I expect this skeptical essay to reach a far greater audience and have a greater effect on practice patterns via #FOAMed than through the traditional route.

JAMA & Procalcitonin

Someday, I’ll publish another article summary that doesn’t involve a conflict-of-interest skewering.  I’m really not as angry as Rob Orman says I am.  This article, at least, is directly relevant to the Emergency Department.

There’s been significant research into biomarkers for infectious/inflammatory processes, with the goal of identifying a sufficiently sensitive assay to use as a “rule-out” for serious infection.  The goal is to use such an assay to prevent the overuse of antibiotics without increasing morbidity/mortality.  This is a good thing.


Procalcitonin is the latest darling of pediatrics and intensive care units.  However, to call the literature “inconclusive” is a bit of an understatement – which is why I was surprised to see an article in JAMA squarely endorsing procalcitonin-guided antibiotic-initiation strategies.  After all, I’ve previously covered negative trials in this blog (pubmed, pubmed).  However, these authors seem to have intentionally narrowed their trial selection to exclude these trials – and they publish no methods describing how articles were systematically selected.


The disclosures for all three authors include “BRAHMS/ThermoFisher”.  Who is this, you might ask?  Google points me to: http://www.procalcitonin.com – where BRAHMS/ThermoFisher will sell you one of seven procalcitonin assays.  JAMA, third-ranked medicine journal in Impact Factor, reduced to advertising masquerading as peer-reviewed science.


“Clinical Outcomes Associated With Procalcitonin Algorithms to Guide Antibiotic Therapy in Respiratory Tract Infections”
http://www.ncbi.nlm.nih.gov/pubmed/23423417

You Can Trust a Cardiologist

Or, at least, their integrity in conduct of research is unimpeachable.

Adding to the conflict of interest debate, this study from the American College of Cardiology evaluated all studies regarding “cardiovascular disease” published in the New England Journal of Medicine, Lancet, and JAMA over an eight-year period.  Studies were discarded if no conflict of interest information was explicitly provided, and, eventually, 550 articles were selected for analysis.

The bad news: conflict of interest was “ubiquitous”, in the authors’ own words.  The good news:  it didn’t seem to affect positivity/negativity of the results.  In fact, the authors couldn’t identify any specific funding or COI factor associated with a preponderance of positive published trial results.

It’s a little odd these authors evaluated solely cardiovascular trials.  And, yes, these journals have the greatest impact factor – but there are plenty of trials published in a variety of other relatively prominent cardiovascular journals that might have been interesting to include.  The external validity of their study is limited by their methodology.

But, at least, for this narrowest of narrow slices, positive and negative trials abounded.  Quite unexpected, to say the least.

“Authors’ Self-Declared Financial Conflicts of Interest Do Not Impact the Results of Major Cardiovascular Trials”
www.ncbi.nlm.nih.gov/pubmed/23395075

Pooled CCTA Outcomes

The state of the art for coronary CT angiograms progressed a great deal in the past year.  Four recent studies, CT-STAT, ACRIN-PA, ROMICAT II, and a fourth by Goldstein et al., have added to our knowledge base regarding the performance characteristics of this test.

Overall, by pooling 3,266 patients from these four trials, a couple of new features shake out as statistically significant.  Specifically, patients undergoing CCTA were significantly more likely to undergo invasive coronary angiography (8.4% vs. 6.3%), and then more likely to receive revascularization (4.6% vs. 2.6%).  This adds to what we already knew – CCTA shortens ED length of stay and reduces overall ED costs compared with “usual care”.

But, we still don’t really know if this test is improving important patient-oriented outcomes.  These intervention numbers are quite low – meaning a great number of patients simply received expensive diagnostic testing without any sort of treatment.  Then, we don’t even know if these revascularizations are improving (or worsening!) outcomes.  Technology keeps blundering forward, disconnected from rationality – the costs go up and up, but we hardly stop to measure whether we’re actually doing any good….
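As a back-of-the-envelope sketch of what those pooled percentages mean per 1,000 patients (assuming, per the “more likely” phrasing, that the higher rate in each pair belongs to the CCTA arm):

```python
# Per-1,000-patients framing of the pooled CCTA results quoted above.
# ICA = invasive coronary angiography.  The subtractions are illustrative
# arithmetic only, not a formal risk-difference or NNT calculation.
n = 1000
ica_ccta, ica_usual = 0.084, 0.063          # ICA rates
revasc_ccta, revasc_usual = 0.046, 0.026    # revascularization rates

extra_ica = n * (ica_ccta - ica_usual)            # additional catheterizations
extra_revasc = n * (revasc_ccta - revasc_usual)   # additional revascularizations
no_intervention = n * (1 - revasc_ccta)           # CCTA patients never revascularized

print(round(extra_ica), round(extra_revasc), round(no_intervention))  # 21 20 954
```

That is, roughly 21 extra catheterizations and 20 extra revascularizations per 1,000 CCTA patients, while about 954 of every 1,000 receive the scan and no intervention at all.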

“Outcomes After Coronary Computed Tomography Angiography in the Emergency Department”
content.onlinejacc.org/article.aspx?articleID=1569168

The Grim World of ALTE

“The risk of subsequent mortality in infants admitted from our pediatric ED with an ALTE is substantial.”

Dire conclusions!  Doom and gloom associated with apparent life-threatening events!

This is a little bit of an odd article.  It’s a chart review of all infants aged 0 to 6 months presenting with an ALTE – including seizure, choking spell, and cyanosis.  The authors reviewed 176 charts of admitted patients, follow-up studies, and eventual mortality.

  • 111 received blood cultures – all negative.
  • 65 received lumbar puncture – all negative.
  • 113 received chest x-rays – 12 of which had infiltrates.
  • 35 received non-contrast head CT – all negative.
  • 62 were tested for RSV – 9 were positive.

So, how many infants died after their ALTE to spawn this conclusion of “substantial” mortality?

Two.

This leads the authors to conclude this high-risk complaint requires admission.  However, each death occurred in a generally previously healthy patient who was admitted with an ALTE, evaluated extensively as an inpatient, discharged from the inpatient service – and died within two weeks regardless.  The only reasoning I can fathom for this recommendation is as a cover-your-ass strategy to prevent being the physician who “last touched” the patient when someone comes back with a lawyer.

“Mortality after discharge in clinically stable infants admitted with a first-time apparent life-threatening event”

How to Fix Your Clinical Trial

Don’t like your study results?  Just drop the patients who inconveniently don’t benefit from treatment.  That’s what these authors have documented as occurring at Pfizer during the trials of gabapentin for off-label treatment of migraine prophylaxis, bipolar disorder, and neuropathic pain.

As part of legal proceedings against Pfizer, internal gabapentin study documents became available for public review.  These authors collected these internal study documents and attempted to correlate the study methods and enrollment between the internal documents and subsequent journal publications.  What these authors found were important irregularities between study protocols and published results.

Specifically, the authors identified at least two studies that randomized patients but then failed to report their enrollment in subsequent publications.  In addition, even when patients were reported, the intention-to-treat analysis was altered to exclude additional patients.  They also noted missing pre-specified safety analyses in nearly all publications, despite their presence in the original study protocols.

Clinicaltrials.gov and other transparency campaigns are steps in the right direction – but, clearly, those steps can be only of limited effectiveness if this sort of unethical results massaging remains rampant.

“Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin”

Mechanical Embolectomy Kills People

Interestingly, it especially killed people who were going to have a favorable outcome with standard care.

This is MR-RESCUE, a trial evaluating the efficacy of endovascular mechanical thrombectomy for acute ischemic stroke.  Patients were eligible for this trial up to 8 hours from stroke onset, and most were enrolled because they were outside the window for tPA – or received tPA but failed to recanalize.  One of the special features of this study was using emergent MRI to identify a patient subgroup that had a potentially viable “penumbra” of brain tissue surrounding the original infarct.  The imaging hypothesis in this study was that patients would particularly benefit from mechanical intervention in the presence of a penumbra such as this.

However, they were wrong.  Oddly, the authors reported their primary outcome as differences in mean mRS.  As discussed in the last blog post, mean and median mRS aren’t used in stroke trials because the profound disability/living death/death numbers at the bottom of the scale don’t represent the clinically relevant treatment effects.  Regardless, they failed to show benefit of mechanical embolectomy.

Overall, patients simply did poorly.  This is a fine example of the exquisite relationship between NIHSS and outcomes, as the median NIHSS in this trial was 17, less than 20% of the patients had good outcomes (mRS 0-2), and 21% died within 90 days.  Looking at Figure 2, it’s clear the penumbra was an excellent prognostic feature – until the mechanical embolectomy occurred.  Then, mortality jumps from 9% to 16%, and favorable mRS drops from 23% to 15%.

These authors used the MERCI and Penumbra systems.  You might already be familiar with the MERCI retriever from earlier, negative trials with significant device complications.  Someday we might see the last of it – but, I’m guessing, where there’s already money sunk into a device, there’s more patient harms to come.

“A Trial of Imaging Selection and Endovascular Treatment for Ischemic Stroke”

Statistics For tPA & Profit

IST-3 broke new ground for misleading statistics in stroke trials with its secondary ordinal analysis, demonstrating “benefit” in the presence of an overall negative, open-label, randomized trial of over 3,000 patients.  Now, who here can decipher the Cochran-Mantel-Haenszel test?  Team Genentech/Boehringer Ingelheim is hoping you can’t.

This is a retrospective, observational cohort from the Virtual International Stroke Trials Registry looking for a way around the pesky “contraindications” to tPA treatment that currently prevent large groups of patients from receiving treatment.  These authors pulled non-treated “controls” from the cohort along with tPA-treated patients who had at least one contraindication or warning to tPA use and compared mRS outcomes at 90 days.  The control group was significantly older and sicker – though their strokes were milder (NIHSS 12 [SD 9] vs. NIHSS 14 [SD 8]) – but the authors adjusted only for age and NIHSS.  Their conclusion?

“Many of the warnings and contraindications of alteplase may not be justified and licences, guidelines, and protocols should be adjusted to accommodate recent data from registries and real-world experience.”

As the authors correctly note earlier in the paper, observational data combined from heterogeneous trials spread over many years are likely rife with differences in care and selection bias.  This alone renders their results hypothesis-generating rather than practice-changing.

The other issue is their primary outcome and statistical tool of choice:

“The primary outcome measure was the mRS at 90 days, analyzed across the whole distribution of scores with the use of the Cochran–Mantel–Haenszel test,”

Here’s an example of an unadjusted mRS distribution “favoring” alteplase:

Just at first glance, it looks pretty much the same – perhaps it even favors placebo (top bar).  How the heck is this significantly positive towards tPA?  Well, the CMH test takes into account all ordinal levels – even the perturbations between disability/living death/death at the bottom of the scale – as opposed to just the clinically relevant mRS 0-1 or mRS 0-2 that were primary outcomes in other stroke trials.  So, the statistical significance in this test has nothing to do with the clinical efficacy of the treatment in question.  Then, the adjusted OR of 1.29 (95% CI 1.00 – 1.66) is somehow based on a mélange of “dichotomized outcomes at 90 days (mRS 0–1, mRS 0–2, NIHSS 0–1, and mortality)”.
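To illustrate the mechanics with entirely hypothetical numbers (not the VISTA data): here are two mRS distributions with identical rates of good outcome, which nonetheless show an ordinal “shift” purely because of how the bad outcomes (mRS 3-6) are distributed.

```python
# Hypothetical mRS counts (0 = no symptoms ... 6 = dead), 1,000 patients per arm.
# Constructed so the clinically relevant dichotomies (mRS 0-1 and mRS 0-2) are
# IDENTICAL between arms, yet a full-distribution ordinal comparison still
# "favors" treatment via shifts among severe disability and death.
treated = [150, 100, 150, 300, 200, 50, 50]
control = [150, 100, 150, 150, 150, 150, 150]

def good_outcome_rate(counts, cutoff):
    """Proportion with mRS <= cutoff -- the dichotomized endpoint."""
    return sum(counts[:cutoff + 1]) / sum(counts)

def probabilistic_index(a, b):
    """P(random 'a' patient has lower mRS than random 'b' patient), counting
    ties as half.  This pairwise quantity is what rank/ordinal comparisons
    across the whole distribution are sensitive to; 0.5 means no shift."""
    total = sum(a) * sum(b)
    wins = sum(a[i] * b[j] for i in range(7) for j in range(7) if i < j)
    ties = sum(a[i] * b[i] for i in range(7))
    return (wins + 0.5 * ties) / total

print(good_outcome_rate(treated, 2), good_outcome_rate(control, 2))  # 0.4 0.4
print(probabilistic_index(treated, control))  # 0.5675 -- ordinal "benefit"
```

The dichotomized good-outcome rates are identical, yet the full-distribution comparison comes out well above 0.5 – exactly the sort of statistically significant, clinically hollow “shift” described above.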

I’m afraid this simply looks like the authors dragged the VISTA data through every permutation of statistical tools possible until they found a test and a combined endpoint for logistic regression that came out in favor of tPA.  And, then, sold it as practice-altering.

Disheartening.

Here’s the disclosures, for your reading pleasure:
BF has received modest honoraria for participation in clinical trials (Sanofi-Aventis). AVA has served as PI of CLOTBUST trial partially funded by Genentech, and currently serves as PI of CLOTBUSTER funded by Cerevast Therapeutics and consultant to Genentech. EB is an employee of Boehringer Ingelheim. JCG, CW, PL, and NKM have no relevant conflicts of interest. AM has served as a consultant for Boehringer Ingelheim, received speaker’s honoraria from Boehringer Ingelheim, and congress expenses from Lundbeck. NW has received expenses from Boehringer Ingelheim for his role as member of the steering committee in relation to the ECASS III trial with alteplase and served as a consultant to Thrombogenics as chairman of the DSMB. SITS International (chaired by NW) received a grant from Boehringer Ingelheim and from Ferrer for the SITS-MOST/SITS-ISTR. His institution has also received grant support toward administrative expenses for coordination of the ECASS III trial. NW has also received lecture fees from Boehringer Ingelheim and from Ferrer. AS and KRL have received research grants, modest honoraria for participation in clinical trials, and have a consultancy or advisory board relationship with manufacturers from drugs (including Boehringer Ingelheim). KRL administered the grant from Genentech for support of this study.

“Thrombolysis in Stroke Despite Contraindications or Warnings?”
stroke.ahajournals.org/…/STROKEAHA.112.674622.abstract

Lidocaine for Renal Colic

Just when you think you’ve heard it all – something new and different.  This is a randomized, blinded trial of morphine vs intravenous lidocaine for the management of flank pain associated with ureterolithiasis.  Why, do you ask?  Because, in Iran, they don’t have ketorolac as an option for renal colic.

Patients were enrolled based on clinical symptoms, and ureterolithiasis was confirmed by plain radiographs.  240 patients in generally well-balanced groups were enrolled; all patients received 0.15 mg/kg IV metoclopramide for nausea, and then the groups were randomized to 0.1 mg/kg of morphine vs. 1.5 mg/kg of intravenous lidocaine.  Pain outcomes were measured by visual analog scale at 5, 10, 15, and 30 minutes – and the short answer is, they both worked, and neither treatment had terribly significant side effects.


I am all for expanding the toolbox for Emergency Medicine, however unusual the idea might be.  After all, every so often, you might need an alternative agent for that extra-special patient who is allergic to ketorolac, morphine, acetaminophen, ondansetron, fentanyl, and the color blue….


“Effectiveness of intravenous lidocaine versus intravenous morphine for patients with renal colic in the emergency department”