Proto Magazine Letter

My recently published short invited response in Proto Magazine, a Massachusetts General Hospital publication, to an article on the state of current medical journals: “Probing Deeper”

They did, however, unexpectedly edit out a portion of my response – an entire paragraph originally between the current 2nd and 3rd paragraphs:

In 2005-2006, The Lancet derived 41% of its revenue through sales of over 11 million reprints.[1]  The NEJM, which published more industry-funded studies than The Lancet – 78% vs. 58% – undoubtedly derives even more.[2]  Ironically, Jeffrey Drazen, editor-in-chief of NEJM, is quoted as saying “Our most important job is vetting information.”  Dr. Drazen infamously failed to do so when privy to information regarding increased mortality in rofecoxib’s (Vioxx) VIGOR trial – a publication for which NEJM sold Merck 900,000 reprints.[3][4]

And, here are my references:

1. Dorsey ER, George BP, Dayoub EJ, Ravina BM. Finances of the publishers of the most highly cited US medical journals. J Med Libr Assoc. 2011 Jul;99(3):255-8.
2. Lundh A, Barbateskovic M, Hróbjartsson A, Gøtzsche PC. Conflicts of interest at medical journals: the influence of industry-supported randomised trials on journal impact factors and revenue – cohort study. PLoS Med. 2010 Oct;7(10):e1000354.
3. Armstrong D. Bitter pill: how the New England Journal missed warning signs on Vioxx. Wall Street Journal 2006 May 15:A1.
4. Smith R. Lapses at the New England Journal of Medicine. J R Soc Med. 2006 Aug;99(8):380-2.

EM Lit of Note on KevinMD.com

Featured today as a guest blog, revisiting the JAMA Clinical Evidence synopsis critiqued last month on this blog, here and here.

It’s rather an experiment in discovering just how influential social media has become – open access, crowdsourced “peer review” – and whether this mechanism for addressing conflict-of-interest in the prominent medical journals is more effective than simply attempting a letter to the editor.

KevinMD.com – “The filtering of medical evidence has clearly failed”

JAMA, Integrity, Accessibility, and Social vs. Scientific Peer Review

Yesterday, I posted regarding a JAMA Clinical Evidence series article involving procalcitonin measurement to guide antibiotic stewardship.  In that post, I raised concerns regarding other negative trials in the same area and, depressingly, noted conflicts of interest for each of the three authors.


Graham Walker, M del Castillo-Hegyi, Javier Benitez and Chris Nickson picked up the blog post, spread it through social media and Twitter, and suggested I write a formal response to JAMA for peer-reviewed publication.  My response: I could put time into such a letter, but what would JAMA’s motivation be to publish an admission of an embarrassing failure of peer review?  And whatever response they did publish would be sequestered behind a paywall – while BRAHMS/ThermoFisher continued to happily reprint their evidence review from JAMA.  Therefore, I will write a response – but I will publish it openly here, on the Internet, and the social peer review of my physician colleagues will determine the scope of its dissemination based on its merits.


Again, this JAMA article concerns procalcitonin algorithms to guide antibiotic therapy in respiratory tract infections.  It is written by Drs. Schuetz, Briel, and Mueller, each of whom receives funding from BRAHMS/ThermoFisher for work related to procalcitonin assays (www.procalcitonin.com).  The evidence they present is derived from a 2012 Cochrane Review – authored by Schuetz, Mueller, Christ-Crain, et al.  The Cochrane Review was funded in part by BRAHMS/ThermoFisher, and eight authors of the review declare financial support from BRAHMS/ThermoFisher.


The Cochrane Review includes fourteen publications examining the utility of procalcitonin-based algorithms to initiate or discontinue antibiotics.  Briefly, in alphabetical order, these articles are:

  • Bouadma 2010 – Authors declare COI with BRAHMS.  This is a generally negative study with regards to the utility of procalcitonin.  Antibiotic use was reduced, but mortality trends favored standard therapy, and the study was underpowered for this difference to reach statistical significance (24% mortality in controls vs. 30% mortality in the procalcitonin-guided group at 60 days).
  • Briel 2008 – Authors declare COI with BRAHMS.  This study is a farce.  These ambulatory patients were treated with antibiotics for such “bacterial” conditions as the “common cold”, sinusitis, pharyngitis/tonsillitis, otitis media, and bronchitis.
  • Burkhardt 2010 – Authors declare COI with BRAHMS.  Yet another ambulatory study randomizing patients with clearly non-bacterial infections.
  • Christ-Crain 2004 – Authors declare COI with BRAHMS.  Again, most patients received antibiotics unnecessarily via poor clinical judgement, for bronchitis, asthma, and “other”.
  • Christ-Crain 2006 – Authors declare COI with BRAHMS.  This is a reasonably enrolled study of community-acquired pneumonia patients.
  • Hochreiter 2009 – Authors declare COI with BRAHMS.  This is an ICU setting enrolling non-respiratory infections along with respiratory infections.  These authors pulled out the 47 patients with respiratory infections.
  • Kristoffersen 2009 – No COI declared.  Odd study.  The same percentage of patients received antibiotics in each group, and in 42/103 cases randomized to the procalcitonin group, physicians disregarded the procalcitonin algorithm’s treatment recommendation.  A small reduction in antibiotic duration was observed in the procalcitonin group.
  • Long 2009 – No COI declared.  Unable to obtain this study, which was published in a Chinese-language journal.
  • Long 2011 – No COI declared.  Most patients were afebrile.  97% of the control group received antibiotics for a symptomatic new infiltrate on CXR compared with 84% of the procalcitonin group.  85% of the procalcitonin group had treatment success, compared with 89% of the control group.  Again, underpowered to detect a difference with only 81 patients in each group.
  • Nobre 2008 – Authors declare COI with BRAHMS.  This is, again, an ICU sepsis study – with 30% of the patients included having non-respiratory illness.  Only 52 patients enrolled.
  • Schroeder 2009 – Authors declare COI with BRAHMS.  Another ICU sepsis study with only 27 patients, of which these authors pulled only 8!
  • Schuetz 2009 – Authors declare COI with BRAHMS.  70% of patients had CAP, most of which was severe.  Criticisms of this study include critique of “usual care” for poor compliance with evidence supporting short-course antibiotic prescriptions, and poor external validity when applied to ambulatory care.
  • Stolz 2007 – Authors declare COI with BRAHMS.  208 patients with COPD exacerbations only.
  • Stolz 2009 – Authors declare COI with BRAHMS.  ICU study of 101 patients with ventilator-associated pneumonia.

So, we have an industry-funded collation of 14 studies – 11 of which involve relevant industry COI.  Most studies compare procalcitonin-guided judgement with standard care – and, truly, many of these studies are straw-man comparisons against sub-standard care in which antibiotics are prescribed inappropriately for indications in which they have no proven efficacy.  We also have three ICU sepsis studies included that discard all diagnoses other than “acute respiratory infection” – resulting in absurdly low sample sizes.  As noted yesterday, larger studies in ICU settings including 1,200 patients and 509 patients suggested harms, no substantial benefits, and poor discriminatory function of procalcitonin assays for active infection.
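As a rough illustration of just how underpowered several of these trials are for mortality differences of the size quoted above (e.g., 24% vs. 30% in Bouadma 2010), here is a back-of-the-envelope sample-size sketch.  The 80% power and two-sided alpha of 0.05 are conventional assumptions of mine, not figures from the review:

```python
# Approximate two-proportion sample-size calculation (normal approximation).
# Mortality proportions are the 60-day figures quoted above; alpha and power
# are conventional values assumed purely for illustration.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Patients needed per arm to detect p1 vs. p2 with the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for two-sided alpha of 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

print(round(n_per_group(0.24, 0.30)))   # roughly 850 patients per arm
```

By this crude estimate, detecting a 24% vs. 30% mortality difference requires on the order of 850 patients per arm – far more than the 81 per group in Long 2011, let alone the handfuls pulled out of the ICU sepsis studies.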

Whether the science eventually favors procalcitonin, improved clinical judgement, or another biological marker, it is a failure of the editors of JAMA to publish such deeply conflicted literature.  Furthermore, the traditional publishing system is configured in such a fashion that critiques are muted compared with the original article – to the point where I expect this skeptical essay to reach a far greater audience and have a greater effect on practice patterns via #FOAMed than through the traditional route.

You Can Trust a Cardiologist

Or, at least, their integrity in the conduct of research is unimpeachable.

Adding to the conflict of interest debate, this study from the American College of Cardiology evaluated all studies regarding “cardiovascular disease” published in the New England Journal of Medicine, the Lancet, and JAMA over an eight-year period.  Studies were discarded if no conflict of interest information was explicitly provided, and, ultimately, 550 articles were selected for analysis.

The bad news: conflict of interest was “ubiquitous”, in the authors’ own words.  The good news: it didn’t seem to affect whether results were positive or negative.  In fact, the authors couldn’t identify any specific funding or COI factor associated with a preponderance of positive published trial results.

It’s a little odd that these authors evaluated solely cardiovascular trials.  And, yes, these journals have the greatest impact factors – but there are plenty of trials published in a variety of other relatively prominent cardiovascular journals that might have been interesting to include.  The external validity of their study is limited by their methodology.

But, at least, for this narrowest of narrow slices, positive and negative trials abounded.  Quite unexpected, to say the least.

“Authors’ Self-Declared Financial Conflicts of Interest Do Not Impact the Results of Major Cardiovascular Trials”
www.ncbi.nlm.nih.gov/pubmed/23395075

How to Fix Your Clinical Trial

Don’t like your study results?  Just drop the patients who inconveniently don’t benefit from treatment.  That’s what these authors have documented as occurring at Pfizer during the trials of gabapentin for off-label treatment of migraine prophylaxis, bipolar disorder, and neuropathic pain.

As part of legal proceedings against Pfizer, internal gabapentin study documents became available for public review.  The authors collected these internal documents and attempted to correlate study methods and enrollment between the internal documents and the subsequent journal publications.  What they found were important discrepancies between study protocols and published results.

Specifically, the authors identified at least two studies that randomized patients but then failed to report their enrollment in subsequent publications.  In addition, even when patients were reported, the intention-to-treat analysis was altered to exclude additional patients.  They also noted missing pre-specified safety analyses in nearly all publications, despite their presence in the original study protocols.
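To make the arithmetic of that sort of exclusion concrete, here is a toy example – the numbers are entirely hypothetical and are not drawn from the gabapentin documents – showing how quietly dropping non-responders from the treatment arm after randomization inflates the apparent benefit:

```python
# Toy numbers only: illustrating the arithmetic of post-hoc exclusion,
# not figures from the gabapentin trials themselves.
treated_total, treated_responders = 100, 45
control_total, control_responders = 100, 40

# Honest intention-to-treat comparison:
itt_diff = treated_responders / treated_total - control_responders / control_total
print(f"ITT difference: {itt_diff:.0%}")          # 5%

# Quietly exclude 15 treated patients who failed to respond:
excluded = 15
massaged = treated_responders / (treated_total - excluded) - control_responders / control_total
print(f"After exclusions: {massaged:.0%}")        # ~13%
```

The treatment itself hasn’t changed – only the denominator has.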

ClinicalTrials.gov and other transparency campaigns are steps in the right direction – but, clearly, those steps can be of only limited effectiveness if this sort of unethical massaging of results remains rampant.

“Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin”

What Are “Trustworthy” Clinical Guidelines?

This short article from JAMA and the corresponding study from Archives are concerned with advising practicing clinicians on how to identify which clinical guidelines are “trustworthy”.  This is a problem – because most aren’t.

The JAMA article paraphrases the eight critical elements in the 2011 Institute of Medicine report required to generate a “trustworthy” guideline, such as systematic methodology, appropriate stakeholders, etc.  Most prominently, however, several deal specifically with transparency, including this paraphrased bullet point:

  • Conflicts of interest:  Potential guideline development group members should declare conflicts. None, or at most a small minority, should have conflicts, including services from which a clinician derives a substantial proportion of income. The chair and co-chair should not have conflicts. Eliminate financial ties that create conflicts.

The Archives article cited by the JAMA article reviews over 100 published guidelines for compliance with the IOM standards.  The worst performance, by far, was on the conflict-of-interest standards: 71% of committee chairpersons and 90.5% of committee co-chairpersons declared COI – when declarations were explicitly stated at all.  Overall, fewer than half of clinical guidelines met more than half of the IOM recommendations for “trustworthiness”.

Sadly, another dismal addition to the all-too-frequent narrative describing the rotten foundation of modern medical practice.

“How to Decide Whether a Clinical Practice Guideline Is Trustworthy”
www.ncbi.nlm.nih.gov/pubmed/23299601

“Failure of Clinical Practice Guidelines to Meet Institute of Medicine Standards”
www.ncbi.nlm.nih.gov/pubmed/23089902

Don’t Believe The Data

This NEJM study, published a couple of days ago, addresses the effect of funding and methodological rigor on physicians’ confidence in study results.  It’s a prospective, mailed and online survey of board-certified internal medicine physicians, in which studies of low, medium, and high methodological rigor were presented with one of three funding disclosures: none, NIH funding, or industry funding.

Thankfully, physicians were less confident in, and less likely to prescribe, the study drug for studies of low methodological quality and for those funded by industry.  Or, so I think.  The study authors – and the accompanying editorial – take issue with the harshness with which physicians judge industry-funded trials.  They feel that, if a study is of high methodological quality, the funding source should not be relevant, and we should “Believe the Data”.  Considering how easy it is to exert favorable effects on study outcomes in ways invisible to ClinicalTrials.gov and the “data”, I don’t think it is safe or responsible to be less skeptical of industry-funded trials.

Entertainingly, their study probably doesn’t even meet their definition of high rigor, considering the 50% response rate and small sample size….

“A Randomized Study of How Physicians Interpret Research Funding Disclosures”
www.ncbi.nlm.nih.gov/pubmed/22992075

Longer Resuscitation “Saves”

This article made the rounds a couple weeks ago in the news media, probably based on the conclusion from the abstract stating “efforts to systematically increase the duration of resuscitation could improve survival in this high-risk population.”


They base this statement on a retrospective review of prospectively gathered, standardized data from in-hospital cardiac arrests.  Comparing 31,000 patients with ROSC following an initial episode of cardiac arrest with a cohort of 33,000 who did not have ROSC, the authors found that patients who arrested at hospitals with higher median resuscitation times were more likely to have ROSC.  Initial ROSC was tied to survival to discharge, with hospitals with the shortest median resuscitation times having a 14.5% adjusted survival compared with 16.2% at hospitals with the longest resuscitations.


Now, if you’re a glass-half-full sort of person, “could improve survival” sounds like an endorsement.  However, when we’re conjuring up hypotheses and associations from retrospective data, it’s important to re-read every instance of “could” and “might” as “could not” and “might not”.  They also adjusted for a horde of patient-related covariates, which gives some sense of the difficulty of weeding a significant finding out from the confounders.  The most glaring difference in their baseline characteristics was the 6% absolute difference in witnessed arrests – which, if not accounted for properly, could nearly explain the entirety of their outcomes difference.
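To get a sense of why that imbalance matters, the sketch below uses hypothetical survival rates for witnessed versus unwitnessed in-hospital arrest – my assumptions, not figures from the study – to show how a 6-point baseline difference alone can generate a survival gap of the same order as the 1.7 percentage points reported:

```python
# Back-of-the-envelope confounding illustration.  Survival rates for witnessed
# vs. unwitnessed in-hospital arrest are assumed values, not data from the study.
surv_witnessed = 0.25
surv_unwitnessed = 0.05

def overall_survival(frac_witnessed):
    """Blended survival for a cohort with the given proportion of witnessed arrests."""
    return frac_witnessed * surv_witnessed + (1 - frac_witnessed) * surv_unwitnessed

# A 6-percentage-point imbalance in witnessed arrests between the comparison groups:
gap = overall_survival(0.85) - overall_survival(0.79)
print(f"{gap * 100:.1f} percentage points")   # 1.2, versus the 1.7-point gap reported
```

Under these assumed numbers the imbalance alone accounts for the bulk of the reported difference; with a larger assumed witnessed-vs-unwitnessed survival gap, it would account for essentially all of it.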


It’s also important to consider the unintended consequences of their statement.  What does it mean to continue resuscitation past the point at which it is judged clinically appropriate?  What sort of potentially well-meaning policies might this entail?  What are the harms to other patients in the facility if nursing and physician resources are increasingly tied up in (mostly) futile resuscitations?  How much additional healthcare cost will result from additional successful ROSC – most of these patients still not being neurologically intact survivors?


“Duration of resuscitation efforts and survival after in-hospital cardiac arrest: an observational study”

www.thelancet.com/journals/lancet/article/PIIS0140…9/abstract

How Preposterous News Propagates

Every so often – perhaps more frequently, if you’re continuously canvassing the literature – there’s a rapturous press release regarding a new medical innovation that seems too good to be true.  And, you wonder, how does the lay media get it so wrong?

This study reviewed a consecutive convenience sample of published literature, looking for articles that resulted in press releases.  They then looked at which elements of each article made it into the press release, as well as the accuracy of the release relative to the overall findings of the article.  Essentially, what they found is that press releases were most likely to contain “spin” when the conclusion of the article’s abstract misrepresented the study findings with “spin”.

The authors also have an interesting summary of the sort of “spin” found in abstracts that misrepresent study findings.  These include:

 • No acknowledgment of nonstatistically significant primary outcome
 • Claiming equivalence when results failed to demonstrate a statistically significant difference
 • Focus on positive secondary outcome
 • Nonstatistically significant outcomes reported as if they were significant

…and several others.

“Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study”

Could Ordering Reprints Help You Get Published?

Medical journals, to a certain extent, require independent, sustainable business models.  The full-time editorial staff, the administrative personnel, and the printing costs must be defrayed by advertising, subscription fees, or other largess.  One of these sources of largess – particularly for journals with high impact factors – is the sale of reprints.  After gifts, reprints of publications are the major promotional material circulated among physicians by pharmaceutical companies.

This recent study in the BMJ queried the most prominent medical journals regarding their reprints, hoping to gauge the scope of reprint requests as well as the financial windfall these might represent.  JAMA, NEJM, and Annals of Internal Medicine all declined to provide data, so the authors were left with the Lancet and the BMJ family of journals.  The most frequently reprinted articles in these journals were far more likely to be industry sponsored, and they represented significant sources of income for the journals – up to a $2.4 million USD order from the Lancet.

There are significant limitations to this study, but, clearly, the revenue stream from reprints may be substantial enough to further influence and bias the publication of medical literature.

“High reprint orders in medical journals and pharmaceutical industry funding: case-control study”
http://www.bmj.com/content/344/bmj.e4212