Emergency Physicians are Mostly Just Hungry

Starving – but not for money – is probably the best way to sum up these descriptive statistics from CMS Open Payments.

This is simply a summary and breakdown of the 2014 calendar year’s worth of physician payments from industry, as reported to Open Payments.  This includes a range of details, from items as mundane as free snacks to large institutional grants for research.  All told, there were 46,405 payments to 12,883 emergency physicians, or approximately one-third of all practicing EM physicians.  39,774 of these payments were related to “Food and Beverage”, typically valued between $10 and $50.

So, most EM physicians are just hungry.

However, such food-related expenses accounted for just 12.2% of the total pharmaceutical spend on EM physicians.  The two highest-grossing categories were lecture fees (1,257 physicians) and consulting fees (833 physicians), each averaging over $1,000.  The products associated with the greatest number and/or largest payments were mostly antibiotics and anti-thrombotics: rivaroxaban, apixaban, dabigatran, ticagrelor, alteplase, ceftaroline, and linezolid.  Rather unexpectedly, to me at least, a few of the other top spots were taken by agents for glycemic control: insulin detemir, canagliflozin, and liraglutide.
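
Put in perspective with a quick back-of-the-envelope calculation from the figures above, food and beverage dominate the payment count, but not the dollars:

```python
# Back-of-the-envelope using the reported figures: food and beverage
# account for the vast majority of payments, but a small share of spend.
total_payments = 46_405
food_payments = 39_774
food_share_of_count = food_payments / total_payments
food_share_of_dollars = 0.122  # 12.2% of total spend, as reported

print(f"Share of payment count:  {food_share_of_count:.0%}")   # ~86%
print(f"Share of total dollars:  {food_share_of_dollars:.0%}") # 12%
```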

Finally, EM physicians were actually fairly low on the payout list compared with other specialties, ranking 25th of the 28 evaluated in this study.  The “winners”, if it weren’t already obvious from your hospital parking garage: orthopedic surgery, cardiology, endocrinology, allergy & immunology, gastroenterology, and urology.

“Financial Ties Between Emergency Physicians and Industry: Insights From Open Payments Data”

ClinicalTrials.gov Registration & Cleaning Up Primary Outcomes

It is an oft-repeated pseudoaxiom that half of your medical knowledge is wrong – we just don’t yet know which half.  This hyperbole is founded, in part, in the work of Ioannidis and examinations of refutations of clinical trials.  This study represents another log to toss onto our conflagration of uncertainty.

These authors noted a recent analysis of large clinical trials funded by the National Heart, Lung, and Blood Institute showed mostly neutral results.  Neutral trials, of course, are still valuable – they frequently help inform efforts to prevent unnecessary or low-value care.  However, the prior analysis evaluated only studies published after 2000, the year trials began being registered in ClinicalTrials.gov.  This new analysis extended the review window back to 1975, to see whether the predominance of null outcomes was a longstanding pattern, or whether the requirement to publicly pre-specify a primary outcome might have had some effect.

Not terribly surprisingly, there is a relatively clear difference in the frequency of null outcomes in the post-ClinicalTrials.gov period compared with prior.

The authors’ quite reasonable conclusion: prior trials, not having the requirement to pre-specify a primary outcome, were probably more likely to retroactively promote a positive outcome as the primary outcome of a study.

Should all these primary outcomes from trials prior to 2000 be re-tested?  Considering all of these prehistoric trials noted ties to industry, it’s probably a fair suggestion – and it certainly fuels our further skeptical re-examination of established medical practices.

“Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time”
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0132382

Which Review of Tamiflu Data Do You Believe?

Ever since its introduction, there have been skeptics regarding the utility of oseltamivir and other neuraminidase inhibitors for the treatment of influenza.  Roche has profited tremendously from strategic stockpiling by many governments as a response to pandemic influenza – yet nearly all the data comes from Roche-conducted trials, and the data has been persistently cloaked from independent review.  This past year, after much strife and public shaming, the Cochrane Collaboration received some access to the clinical trial reports to conduct an independent review.  This review found, on average, adults receiving early treatment with oseltamivir benefited from a reduction in symptom duration from 7 days to 6.3 days.  No benefit was found for reduction in respiratory infectious complications or hospitalization – the truly critical needs during influenza outbreaks.

However, a second group also conducted an independent review – the “Multiparty Group for Advice on Science” (MUGAS).  Their results, based on an individual-patient meta-analysis, are published in The Lancet and offer similar – yet wildly different – conclusions.  They find, as did the Cochrane group, approximately a 17-hour reduction in symptoms in the intention-to-treat population across the eight Roche trials evaluated.

Similar to the Cochrane review, they perform secondary analyses for “lower respiratory tract infection” (e.g., bronchitis or pneumonia) and hospitalization, stratified by the ITT and ITT-infected populations.  Most prominently emphasized are the results for the ITT-infected population, in which antibiotics for LRTI were provided to 4.2% of the oseltamivir cohort, compared with 8.7% of placebo.  Likewise, 0.9% of patients were hospitalized for any cause, compared with 1.7% of the placebo cohort.  The authors therefore conclude oseltamivir use decreased the infectious complications of influenza.

These numbers, however, are entirely different from the Cochrane review.  The Cochrane review found a 1.4% hospital admission rate in the oseltamivir cohort and 1.8% in the placebo cohort.  Broken down by trial, the oseltamivir-cohort admissions in the MUGAS analysis compared with the Cochrane review (a rough tally follows the list):

  • M76001: 7/965 vs. 9/965
  • WV15670: 1/241 vs. 1/484
  • WV15671: 1/210 vs. 6/411
  • WV15707: 2/17 vs. 2/17
  • WV15812+: 6/199 vs. 9/199
  • WV15819+: 6/360 vs. 9/362
  • WV16277: 2/226 vs. 2/225
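
A crude tally of the rows above – my own arithmetic, not either group’s analysis, and the two columns likely reflect somewhat different analysis populations – illustrates the size of the discrepancy in the oseltamivir arms:

```python
# Crude tally of the oseltamivir-arm hospitalizations listed above.
# The two analyses use different denominators for some trials, so this
# is only a rough gauge of the discrepancy, not a reconstruction of
# either group's actual method.
mugas = {"M76001": (7, 965), "WV15670": (1, 241), "WV15671": (1, 210),
         "WV15707": (2, 17), "WV15812+": (6, 199), "WV15819+": (6, 360),
         "WV16277": (2, 226)}
cochrane = {"M76001": (9, 965), "WV15670": (1, 484), "WV15671": (6, 411),
            "WV15707": (2, 17), "WV15812+": (9, 199), "WV15819+": (9, 362),
            "WV16277": (2, 225)}

for label, counts in (("MUGAS", mugas), ("Cochrane", cochrane)):
    events = sum(e for e, _ in counts.values())
    total = sum(n for _, n in counts.values())
    print(f"{label:9s} {events}/{total} = {events / total:.1%}")
# MUGAS     25/2218 = 1.1%
# Cochrane  38/2663 = 1.4%
```

Thirteen additional oseltamivir-arm hospitalizations, scattered across several trials, is not a trivial bookkeeping difference.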

The differences in WV15670 and WV15671 appear to stem, at least in part, from the MUGAS analysis being restricted to only those trial patients taking 75mg twice daily, and not 150mg twice daily.  However, it is otherwise entirely unclear how the Cochrane group found additional hospitalizations the MUGAS group did not in the remaining trials – particularly considering the hospitalization numbers in the placebo cohorts were essentially identical.  Might it be partly a result of the MUGAS group receiving their data directly from a Roche web portal, while the Cochrane group reviewed the individual clinical study reports?

Rather, might it be revealing to pry into the genesis of the “Multiparty Group for Advice on Science”?  Is it an unbiased, independent clearinghouse for re-analysis of trial data?  Does it have a long track record of respected publications in multiple disciplines?  Unfortunately, neither of these conjectures is true – making it increasingly likely this is a puppet foundation fraught with conflicts of interest.  MUGAS and the present work were funded by an unrestricted grant from Roche.  Furthermore, MUGAS, along with the European Scientific Working group on Influenza (ESWI), is a project of Semiotics, a scientific branding and communication company specializing in influenza.  The stated goal of Semiotics is promoting corporate science and ensuring its place atop the policy agenda – and MUGAS is one of their “brands”.  This ought to very clearly demonstrate MUGAS is not a scientific enterprise, but rather an organization tasked with the sort of advocacy that best represents the needs of its sponsors.

Any bias is also apparent simply in the style used to present the results.  These authors present the tiny absolute differences in hospitalization and infectious complications in forest-plot figures using only relative risk, rather than absolute risk – serving to inflate the apparent effect size.  Conversely, they present the increased incidence of adverse effects in a table culminating in adjusted absolute risk, with the opposite effect.  This manner of presentation persists in their Discussion, highlighting a “significant 63% reduction in risk of hospitalization”, compared with “absolute increases of 3.7% for nausea and 4.7% for vomiting.”
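
To make the framing concrete, here is a quick sketch of my own using the unadjusted ITT-infected proportions quoted above (0.9% versus 1.7% hospitalization) – noting the paper’s headline 63% figure is an adjusted estimate, so these numbers will not match it exactly:

```python
# Contrast relative vs. absolute framing of the hospitalization outcome,
# using the unadjusted ITT-infected proportions quoted above (not the
# paper's adjusted estimate).
oseltamivir_rate = 0.009   # 0.9% hospitalized for any cause
placebo_rate = 0.017       # 1.7% hospitalized for any cause

relative_risk_reduction = 1 - oseltamivir_rate / placebo_rate
absolute_risk_reduction = placebo_rate - oseltamivir_rate
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # ~47%
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # 0.8%
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # ~125
```

The same data read very differently as “nearly half the risk of hospitalization” versus “eight fewer hospitalizations per thousand treated” – precisely the asymmetry in presentation described above.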

So – the results of an analysis performed by a “brand”, highlighting results discordant with a prior unbiased analysis.  Where is the peer review vetting such discrepancies?  With so many professional reputations and so much revenue at stake – which report do you believe?

“Oseltamivir treatment for influenza in adults: a meta-analysis of randomised controlled trials”
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)62449-1/abstract

Additional editorial content:
“The BMJ Today: The FDA and CDC’s disagreement over Tamiflu, and the spy who isn’t”
http://blogs.bmj.com/bmj/2015/02/05/the-bmj-today-the-fda-and-cdcs-disagreement-over-tamiflu-and-the-spy-who-isnt/

The Remarkable Power of Placebo

Have you ever felt pressured to provide a patient with something at the time of discharge?  Something, anything to ease their suffering for an illness of unalterable benign progression?  Never?  You cold-hearted bastard.

This trial, despite its small size, provides yet another beautiful look at the magical healing power of placebo.  Or, more accurately, not so much the healing power as the satisfying power.  After all – most families in the ED at two in the morning are not there because their child is awake from coughing, but because the parents are.

This trial, in the same vein as several honey trials before it, compared no treatment, a placebo treatment (grape-flavored water), and agave nectar for the treatment of pediatric cough-related illness.  Agave nectar was chosen for its similarity to honey, while not carrying the hypothetical botulism risk.  With ~40 patients in each group, all patients improved over the course of the study.  However, despite the small sample, both agave nectar and placebo provided a durable advantage over no treatment across all surveyed measures of patient and parent comfort.  There was no difference, though, between placebo and agave nectar – or, if there was, it was too small to be detected in this study.
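
For a sense of what “too small to be detected” means here, a rough power sketch of my own (not the authors’ calculation), assuming a conventional two-arm comparison of continuous symptom scores:

```python
# Rough minimum detectable effect for a two-arm comparison with ~40
# patients per group (illustrative only; not the authors' calculation).
from statsmodels.stats.power import TTestIndPower

min_detectable_d = TTestIndPower().solve_power(
    effect_size=None,       # solve for the standardized effect size
    nobs1=40,               # ~40 patients per arm
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"Minimum detectable standardized effect: d ~ {min_detectable_d:.2f}")
# ~0.64 – a moderate-to-large difference; anything smaller between
# placebo and agave nectar would likely have gone undetected.
```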

What is most remarkable about this study is the authors’ discussion – that placebo treatments could be considered ethical, even when no benefit to such treatment is found in controlled studies.  Providing patients – or parents – with, at the least, an inexpensive, harmless treatment option takes advantage of the power of belief, to the extent that real patient- and parent-oriented benefits may be observed.

Unfortunately, the lead author of this study was a paid consultant to Zarbee’s Inc (a maker of “natural” over-the-counter remedies) at the time the study was initiated, and Zarbee’s provided funding for the study.  But, thanks to their contribution to science, we now know they only work as well as you expect ….

“Placebo Effect in the Treatment of Acute Cough in Infants and Toddlers”
http://www.ncbi.nlm.nih.gov/pubmed/25347696

[Image: “Homeopathic Accident & Emergency” by Darren Cullen, copyright Darren Cullen – Spellingmistakescostlives]

Scientific Writing is a Tragicomedy! Destroy!

Modern scientific writing – both the exercise of writing it and of reading it – is obtuse and uninviting.  Rather than clearly communicating an unbiased reflection of the conduct and findings of a particular study, the medical literature most commonly succeeds in doing the opposite.  After all, how else would I find enough to complain about on this blog?

This editorial elucidates so many joyfully preposterous notions it cannot help but be loved.  It is best described as a no-holds-barred cage match against all the inane pageantry of scientific writing.  Just a few of the gems, paraphrased:

  • Don’t let the authors write the abstract; they’ll just misrepresent the study!
  • Delete the introduction; uninsightful filler.
  • No one cares about the brand and manufacturer of the statistical package used.
  • Unequal composite end-points and subgroup analyses should be banished.
  • The discussion section serves only the authors’ purposes, advancing dubious claims through selective reporting and biased interpretation of their results.

Some elements of this brief report are, indeed, novel.  Others are simply accepted best practices long since forgotten.  Regardless, it is a refreshing reminder of how brutally poorly the current medical literature serves effective knowledge translation.

“Ill communication: What’s wrong with the medical literature and how to fix it.”
http://www.ncbi.nlm.nih.gov/pubmed/25145940

Who Loves Tamiflu?

Those who are paid to love it, by a wide margin.

This brief evaluation, published in Annals of Internal Medicine, asks the question: is there a relationship between financial conflicts of interest and the conclusions of systematic reviews regarding the use of neuraminidase inhibitors for influenza?  To answer such a question, these authors reviewed 37 assessments in 26 systematic reviews, published between 2005 and 2014, and rated the concluding language of each as “favorable” or “unfavorable”.  They then checked each author of each systematic review for relevant conflicts of interest with GlaxoSmithKline and Roche Pharmaceuticals.

Among those systematic reviews associated with author COI, 7 of 8 assessments were rated as “favorable”.  Among the remaining 29 assessments made without author COI, only 5 were favorable.  Of the reviews published with COI, only 1 made mention of limitations due to publication bias or incomplete outcomes reporting, versus most of those published without COI.
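
For a sense of scale, a quick sketch of my own (not the authors’ analysis) running Fisher’s exact test on the counts quoted above:

```python
# Association between author COI and a "favorable" assessment, using
# the counts quoted above (illustrative only; the paper's own analysis
# may differ).
from scipy.stats import fisher_exact

#                favorable, unfavorable
with_coi    = [7, 1]    # 7 of 8 assessments favorable
without_coi = [5, 24]   # 5 of 29 assessments favorable

odds_ratio, p_value = fisher_exact([with_coi, without_coi])
print(f"Odds ratio: {odds_ratio:.0f}, p = {p_value:.4f}")
# Odds ratio ~34 with p well below 0.01 – hard to dismiss as chance.
```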

Shocking findings to all frequent readers, I’m sure.

“Financial Conflicts of Interest and Conclusions About Neuraminidase Inhibitors for Influenza”
http://www.ncbi.nlm.nih.gov/pubmed/25285542

Original link in error … although, it’s a good article, too!
http://www.ncbi.nlm.nih.gov/pubmed/24218071

Why Should Patients Be Denied Access to Results?

Interestingly, even though patients have long had full access to their medical records – laborious as they might be to obtain – the Department of Health and Human Services only recently issued a ruling stating patients may obtain their laboratory results directly from medical laboratories.

The real question – why did this take so long?  Are lab results, barring a few exceptional cases, truly dangerous?  And are patients not essentially the true owners of their medical testing?  Certainly, where the patient has directly purchased diagnostics, there should be no obstacle to providing them with the information.  One can make a technical case that, when the results are purchased by a third party – a government or health insurance provider – those entities are the true owners of the result, but that hardly holds up to ethical scrutiny.

Regardless, this step is just one part of a growing movement towards transparency in medicine.  I think this is a good thing – patients ought to be involved at each step of testing and interpretation, and ideally should be provided with their results in real time.  It is outdated paternalism to suggest patients cannot be trusted with their own test results; is it reasonable to treat patients as a “threat” to themselves for using alternative information sources to self-educate?  If this is truly so, the answer is not to hide the results – but rather to improve how we as professionals educate patients, and to improve the lines of communication.

“Direct-to-Patient Laboratory Test Reporting”
http://jama.jamanetwork.com/article.aspx?articleid=1882585 (free fulltext)

Failing Our Profession Through Futile Care

It is not always feasible to serve all masters in medicine.  From a resource-utilization standpoint, unfortunately, one missed opportunity is how we approach futile care.  We have all experienced the care of a patient who, regardless of testing and therapy, has zero chance of meaningful recovery.  Terminating care for these patients sometimes requires difficult conversations, and can snowball out of control into adverse legal and public-relations consequences.

But, as this report from UCLA and RAND details, our failure to properly address futile care and end-of-life issues results in direct downstream harms to other patients.  These authors surveyed ICU physicians each day across 5 different ICUs, inquiring whether any of the patients under their care were receiving futile treatment.  Overall, 1,136 patients were assessed over 3 months – with 123 reported to be receiving futile treatment.  On 72 days during the survey period, an ICU was simultaneously full and providing futile care – and these periods at capacity resulted in 33 patients boarding >4 hours in the Emergency Department, 9 patients waiting >1 day to transfer in from an outside hospital, and 15 additional transfer requests being cancelled after waiting >1 day.  Two patients died while awaiting transfer during times when an ICU was at capacity while a patient was receiving futile care.

While this is just a single-center experience, I am certain we have all experienced ED boarding or transfer difficulties as a result of ICU capacity.  These patients are subject to proven harms due to delays in care, and, as such, I agree with the authors’ conclusion:

“It is unjust when a patient is unable to access intensive care because ICU beds are occupied by patients who cannot benefit from such care….The ethic of “first come, first served” is not only inefficient and wasteful but it is also contrary to Medicine’s responsibility to apply healthcare resources to best serve society.”

“The Opportunity Cost of Futile Treatment in the ICU”
http://www.ncbi.nlm.nih.gov/pubmed/24810527

Just Another Advertisement for tPA

As with last week’s coverage of the updated Cochrane Systematic Review for tPA in acute ischemic stroke, the key question is: what’s new?

The first pooled meta-analysis, published in The Lancet in 2004, included NINDS, ECASS I, ECASS II, and ATLANTIS.  It was subsequently updated in 2010 to add ECASS III and EPITHET.  Now, these authors have decided to add IST-3.

I am actually a huge fan of individual-patient meta-analyses.  Depending on data availability, the similarity of trial protocols, and other issues associated with heterogeneity, this is the gold standard for aggregating data and increasing power.  Individual-patient analyses also allow more reliable exploration of subgroup effects than is otherwise possible through conventional meta-analyses or systematic reviews.
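
As a toy illustration of what a one-stage individual-patient analysis looks like – entirely my own sketch with simulated data and made-up variable names, not the authors’ model – the appeal is being able to model treatment-by-covariate interactions directly on pooled patient-level rows:

```python
# Toy one-stage IPD meta-analysis: pool simulated patient-level rows
# across trials, adjust for trial as a fixed effect, and test a
# treatment-by-delay interaction. Data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "trial": rng.choice(["NINDS", "ECASS_II", "IST3"], size=n),
    "treated": rng.integers(0, 2, size=n),
    "onset_to_treatment_hr": rng.uniform(0.5, 6.0, size=n),
})
# Simulate a treatment benefit that shrinks as time-to-treatment grows.
log_odds = -1.0 + df["treated"] * (0.6 - 0.15 * df["onset_to_treatment_hr"])
df["good_outcome"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

model = smf.logit(
    "good_outcome ~ treated * onset_to_treatment_hr + C(trial)", data=df
).fit(disp=False)
print(model.summary())  # the interaction term asks: does benefit decay with delay?
```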

But, at the crux of it, a meta-analysis is only as good as the included trials – and this is a topic much debated over the last twenty years.  Entertainingly, the 2014 publication includes this bland statement:

Role of the funding source 

The funders had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data and responsibility for the decision to submit for publication.

Yes, the funding source had nothing to do with the study design – excepting all the folks receiving speaker fees and honoraria, and the fact that the original idea and refinements to the approach were contributed by one of the authors who is an employee of Boehringer Ingelheim:

KRL has received speaker fees from and has served on the data monitoring committee of trials for Boehringer Ingelheim; his department has received research grant support from Genentech.  GA has received research grant support from Lundbeck, fees for consultancy and advisory board membership from Lundbeck, Covidien, Codman, and Genentech, fees for acting as an expert witness, and owns stock in iSchemaView. EB is employed by Boehringer Ingelheim. SD has received honoraria from Boehringer Ingelheim, EVER Pharma, and Sanofi and has received fees for consultancy and advisory board membership from Boehringer Ingelheim and Sanofi. GD has received research grant support from the NHMRC (Australia) and honoraria from Pfizer and Bristol-Myers Squibb. JG has received fees for consultancy and advisory board membership from Lundbeck. RvK has received speaker fees and honoraria from Penumbra and Lundbeck. RIL has received honoraria from Boehringer Ingelheim. JMO has received speaker fees from Boehringer Ingelheim. MP has received travel support from Boehringer Ingelheim. BT has received honoraria from Pfizer.  DT has received speaker fees and fees for consultancy and advisory board membership from Boehringer Ingelheim and Bayer.  JW has received research grant support from the UK Medical Research Council and from Boehringer Ingelheim to the University of Edinburgh for a research scanner bought more than 10 years ago. WW has received research grant support from the UK Medical Research Council. PS has received honoraria for lectures which were paid to the department from Boehringer Ingelheim. KT has received research grant support from the Ministry of Health, Labour, and Welfare of Japan, and speaker fees from Mitsubishi Tanabe Pharma.  WH has received research grant support from Boehringer Ingelheim, and speaker fees and fees for consultancy and advisory board membership from Boehringer Ingelheim.

The same level of COI was present in previous versions – including employees of the sponsor as authors – but, interestingly, at least the 2004 version explicitly acknowledges a critical issue:

Role of the funding source 

For the ATLANTIS trials, Genentech provided full support for the study and Genentech employees participated to some extent in study design, data collection, data analysis, and data interpretation, writing of the report, and in the decision to submit the manuscript for publication. For the ECASS trials, Boehringer Ingelheim provided full support. Employees of Boehringer Ingelheim participated in study design, in data collection, data analysis, data interpretation, writing of the report, and in the decision to submit the report for publication.

Nothing has changed.  If you trusted the data then, you trust the data now – and vice-versa.

So, what is new?  If anything, what’s new is worse than what preceded it.  The authors have nearly doubled the cohort for analysis – by including a decade-long trial crippled by the bias introduced by an open-label, mostly unblinded design.  Despite the massive resources invested in conducting it, IST-3 is simply too flawed for inclusion – any small positive signals regarding tPA within it are likely to be exaggerated.  And, simply put, that’s where the astute reader ought to stop reading this publication.  There’s no point in trying to interpret the results, to fuss over the heterogeneity between trials, the missing baseline characteristics for their many subgroup analyses, or whether the trials stopped early for harm or futility – ATLANTIS – are properly acknowledged.  The authors also omit several planned secondary analyses described in their statistical protocol – although, considering the garbage-in/garbage-out nature of this work, it’s of debatable importance.

The last decade of prospective research – ECASS III and IST-3 – has done nothing but degrade the quality of evidence describing tPA in acute ischemic stroke.  If there is, indeed, anyone left on the fence regarding the pro/con tPA debate, this effort ought to move the needle not at all.  Very early treatment with tPA probably benefits a properly selected subset of patients with acute ischemic stroke.  The rest – whether by increasing age, high or low NIHSS, specific stroke syndromes, or time-dependent factors – have a much smaller, if any, chance of benefit exceeding the chance of harm.  Until we have unbiased evidence, we’ll never truly know how best to select patients for this therapy – and neurologists will continue to lament low treatment rates, while Emergency Physicians continue to reject pro-tPA clinical policies.  Only new, independent data has a chance to substantially change our approach to acute ischemic stroke.

“Effect of treatment delay, age, and stroke severity on the effects of intravenous thrombolysis with alteplase for acute ischaemic stroke: a meta-analysis of individual patient data from randomised trials”
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)60584-5/abstract

Nocebo Effects, the Dark Side of Placebo

We easily appreciate the placebo effect – the simple expectation of a treatment’s success positively affects its efficacy.  To prove a new treatment’s utility, then, we compare it against a placebo – a sham with the same expectation of success – to reveal the true magnitude of benefit (or harm).

However, much less appreciated is the flip side: nocebo effects.  That is, if a patient expects to have adverse effects from a treatment, they are more likely to experience them.  This has implications for clinical trials, of course, but also for discontinuation of therapy in general practice.  For example, consider those lovely pharmaceutical commercials, showing happy couples skydiving, lounging in bathtubs, or otherwise living faux-healthy lives – while simultaneously providing the droning voice-over detailing a litany of dire, disabling side effects.  Each mention of an adverse outcome increases the likelihood a patient will perceive or experience it – potentially harming patients through decreased adherence to otherwise beneficial treatment.

[Image: “Nocebo” – Darren Cullen (2012)]

These authors review the causes and implications of nocebo effects, and offer several recommendations regarding effective strategies to minimize them.  My favorite, by far:

“Refer to web-based and other information systems that provide evidence-based information, instead of unproven, anxiety-increasing comments.”

Ah, yes – you mean, basically, the entire Internet: insane, uninformed, anecdotal.  Good luck with that.

“Avoiding Nocebo Effects to Optimize Treatment Outcome”
http://www.ncbi.nlm.nih.gov/pubmed/25003609