Just Another Advertisement for tPA

As with last week’s coverage of the updated Cochrane Systematic Review for tPA in acute ischemic stroke, the key question is: what’s new?

The first pooled meta-analysis, published in The Lancet in 2004, included NINDS, ECASS I, ECASS II, and ATLANTIS.  It was subsequently updated in 2010 to add ECASS III and EPITHET.  Now, these authors have decided to add IST-3.

I am actually a huge fan of individual-patient meta-analyses.  Depending on the data availability, the similarity of trial protocols, and other issues associated with heterogeneity, this is the gold-standard for aggregating data and increasing power.  Individual-patient analyses also allow for more reliable exploration of subgroup effects not otherwise possible through regular meta-analyses or systematic reviews.

But, at the crux of it, a meta-analysis is only as good as the included trials – and this is a topic much debated over the last twenty years.  Entertainingly, the 2014 publication includes this bland statement:

Role of the funding source 

The funders had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data and responsibility for the decision to submit for publication.

Yes, the funding source had nothing to do with the study design, excepting all the folks receiving speaker fees and honoraria – and the fact the original idea and refinements to the approach were contributed by one of the authors who is an employee of Boehringer Ingelheim:

KRL has received speaker fees from and has served on the data monitoring committee of trials for Boehringer Ingelheim; his department has received research grant support from Genentech.  GA has received research grant support from Lundbeck, fees for consultancy and advisory board membership from Lundbeck, Covidien, Codman, and Genentech, fees for acting as an expert witness, and owns stock in iSchemaView. EB is employed by Boehringer Ingelheim. SD has received honoraria from Boehringer Ingelheim, EVER Pharma, and Sanofi and has received fees for consultancy and advisory board membership from Boehringer Ingelheim and Sanofi. GD has received research grant support from the NHMRC (Australia) and honoraria from Pfizer and Bristol-Myers Squibb. JG has received fees for consultancy and advisory board membership from Lundbeck. RvK has received speaker fees and honoraria from Penumbra and Lundbeck. RIL has received honoraria from Boehringer Ingelheim. JMO has received speaker fees from Boehringer Ingelheim. MP has received travel support from Boehringer Ingelheim. BT has received honoraria from Pfizer.  DT has received speaker fees and fees for consultancy and advisory board membership from Boehringer Ingelheim and Bayer.  JW has received research grant support from the UK Medical Research Council and from Boehringer Ingelheim to the University of Edinburgh for a research scanner bought more than 10 years ago. WW has received research grant support from the UK Medical Research Council. PS has received honoraria for lectures which were paid to the department from Boehringer Ingelheim. KT has received research grant support from the Ministry of Health, Labour, and Welfare of Japan, and speaker fees from Mitsubishi Tanabe Pharma.  WH has received research grant support from Boehringer Ingelheim, and speaker fees and fees for consultancy and advisory board membership from Boehringer Ingelheim.

The same level of COI was present in previous versions – including employees of the sponsor as authors – but, interestingly, at least the 2004 version explicitly acknowledges a critical issue:

Role of the funding source 

For the ATLANTIS trials, Genentech provided full support for the study and Genentech employees participated to some extent in study design, data collection, data analysis, and data interpretation, writing of the report, and in the decision to submit the manuscript for publication. For the ECASS trials, Boehringer Ingelheim provided full support. Employees of Boehringer Ingelheim participated in study design, in data collection, data analysis, data interpretation, writing of the report, and in the decision to submit the report for publication.

Nothing has changed.  If you trusted the data then, you trust the data now – and vice-versa.

So, what is new?  If anything, what’s new is worse than what preceded it.  The authors have nearly doubled the cohort for analysis – by including a decade-long trial crippled by the bias introduced by its open-label, mostly unblinded design.  Despite the massive resources invested in conducting it, IST-3 is simply too flawed for inclusion – any small positive signals regarding tPA are all but certain to be exaggerated.  And, simply put, that’s where the astute reader ought to stop reading this publication.  There’s no point in trying to interpret the results, to fuss over the heterogeneity between trials, the missing baseline characteristics for the many subgroup analyses, or whether the trials stopped early for harm or futility – ATLANTIS – are properly acknowledged.  The authors also omit several planned secondary analyses described in their statistical protocol – although, considering the garbage-in/garbage-out nature of this work, that is of debatable importance.

The last decade of prospective research – ECASS III and IST-3 – has done nothing but degrade the quality of evidence describing tPA in acute ischemic stroke.  If there is, indeed, anyone left on the fence in the pro/con tPA debate, this effort ought not move the needle at all.  Very early treatment with tPA probably benefits a properly selected subset of patients with acute ischemic stroke.  The rest – whether stratified by increasing age, high-or-low NIHSS, specific stroke syndromes, or time-dependent factors – have a much smaller, if any, chance of benefit exceeding the chance of harm.  Until we have unbiased evidence, we’ll never truly know how best to select patients for this therapy – and neurologists will continue to lament low treatment rates, while Emergency Physicians continue to reject pro-tPA clinical policies.  Only new, independent data has a chance of substantially changing our approach to acute ischemic stroke.

“Effect of treatment delay, age, and stroke severity on the effects of intravenous thrombolysis with alteplase for acute ischaemic stroke: a meta-analysis of individual patient data from randomised trials”
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)60584-5/abstract

Nocebo Effects, the Dark Side of Placebo

We easily appreciate the placebo effect – the simple expectation of a treatment’s success positively affects its efficacy.  To prove a new treatment’s utility, then, we compare it against a placebo – a sham with the same expectation of success – to reveal a true magnitude of benefit (or harm).

However, much less appreciated is the flip side: nocebo effects.  I.e., if a patient expects to have adverse effects from a treatment, they are more likely to do so.  This has implications for clinical trials, of course, but also for discontinuation of therapy in general practice.  For example, consider those lovely pharmaceutical commercials, showing happy couples skydiving, in bathtubs, or otherwise living faux healthy lives – while simultaneously providing the droning voice-over detailing a litany of dire, disabling side effects.  Each mention of an adverse outcome increases the likelihood a patient will perceive or experience it – potentially harming patients through decreased adherence to otherwise beneficial treatment.

Nocebo – Darren Cullen (2012)

These authors review the causes and implications of nocebo effects, and offer several recommendations regarding effective strategies to minimize them.  My favorite, by far:

“Refer to web-based and other information systems that provide evidence-based information, instead of unproven, anxiety-increasing comments.”

Ah, yes – you mean, basically, the entire Internet: insane, uninformed, anecdotal.  Good luck with that.

“Avoiding Nocebo Effects to Optimize Treatment Outcome”
http://www.ncbi.nlm.nih.gov/pubmed/25003609

The Scandal of Dabigatran – A Summary

We’ve been desperate for a more elegant solution to anticoagulation than rat poison for seemingly an eternity.  Now, we have alternatives: the direct thrombin and factor Xa inhibitors.  The studies supporting their use seem favorable.

But, as the old story goes – and as previously reported on this blog many times – Boehringer Ingelheim has been selectively reporting only the most favorable aspects of their flagship drug, dabigatran.  Increased cardiovascular events have been downplayed through a study design not powered to detect a difference.  Issues with fixed-dose therapy – and the lack of a range of options for patients with renal impairment – rear their ugly heads in multiple case reports.

Then, the most damning – the recent legal action reveals Boehringer Ingelheim, after selling dabigatran as neither requiring monitoring nor having a reliable assay to monitor its effects, was hiding information on both counts.  There is, in fact, substantial individual-patient variability in dabigatran efficacy and bleeding risk, and the HEMOCLOT test is, in fact, a reliable method of measuring its activity.  Review of internal documents shows employees were aware many patients might benefit from routine monitoring of levels – but this would eliminate one of the drug’s selling points (and cost savings) over warfarin.  These e-mails also specifically address the potential damaging effect on sales if such information were released in the scientific literature.

Clearly, yet another case where first-mover status into a lucrative market trumped patient-safety concerns.  If you wonder where the rampant skepticism regarding conflict-of-interest comes from on this blog – this is a beautifully flagrant example.

“Dabigatran: how the drug company withheld important analyses”
http://www.bmj.com/content/349/bmj.g4670 (free fulltext)

Previous EM Lit of Note Posts:
“Rivaroxaban Can Be Reversed, But Not Dabigatran” – Sept 2011
“Scattering Tacks In The Road” – Jan 2012
“Dabigatran — Uncharted Waters and Potential Harms” (Annals of Internal Medicine) – May 2012
“Dabigatran – It’s Everywhere!” – Sept 2012
“Not-So Routine Surgery on Dabigatran” – Sept 2012
“Dabigatran: Hidden Danger in the Home” – Nov 2012
“Dabigatran & CES1 SNP rs2244613” – Mar 2013

Should We Keep Patients in the Dark on Costs?

That seems to be the overwhelming opinion of folks interviewed for this recent News & Perspective from Annals of Emergency Medicine.

Citing everything from ignorance to the Emergency Medical Treatment and Labor Act, several clinicians in this vignette make the case that discussions of cost have no role in Emergency Department care.  Victor Friedman, of the ACEP board of directors, says costs are “irrelevant to me as a provider …. The billing and all that stuff comes later.”  Ellis Weeker, from CEP America, is concerned any discussion of costs might influence decisions regarding whether patients are seen – and might potentially represent an EMTALA violation.

On the other hand, Neal Shah of Costs of Care points out there are real patient harms secondary to the financial burdens of healthcare, in no small part because of the astounding charges meted out from the Emergency Department.  While patients rarely see (or pay) the fantasy prices on the chargemaster, the burdens of even a fraction of these costs may mean choosing between food and insulin, or heat and clopidogrel.

As you may have gathered from my previous writing on the subject, I fall squarely on the side of “costs should be communicated”, within reason.  I agree with Dr. Shah that many Emergency Department interactions are “urgent” rather than “emergent”, and that there is time to include costs as an adverse effect of a test or therapy.  I look forward to the communication instrument his team is developing.

“Price Transparency in the Emergency Department”
http://www.sciencedirect.com/science/article/pii/S0196064414004211

The Struggles of tPA Consent

Providing informed consent for any therapeutic intervention can be challenging.  And then, there’s stroke.  Acute stroke spans the gamut from mildly limiting to profoundly disabling – with a non-linear relationship between the NIHSS and disability.  There are folks with an NIHSS score of 4 who can walk into the Emergency Department, and there are folks with the same score or lower who are functionally incapacitated.  All this means it’s a struggle to provide an individualized estimate of the benefits, risks, and alternatives in consent for tPA.

This is a lovely, short, qualitative survey of a handful of (mostly) neurology consultants in the United Kingdom, asking a few questions regarding the diagnostic process, shared decision-making, and consent for thrombolysis.  Not all consultants surveyed seemed to appreciate the challenges, but others recognized limitations in the data, as well as how difficult it made informed consent:

“I think there needs to be em, err a minimum standard, standardised information available based on what you believe is the right interpretation of the trial. We have to remember that this is based on em, err limited number of randomised trials ….. This is a particularly heterogeneous disease it cannot be applied to a single patient, I think the predictions em in model could be designed but again I don’t think it can be predicted for an individual group of pa-, individual patients so we believe these are the kind of risks and benefits but you know it cannot be predicted to the individual patient.”

Other physicians commented upon the challenges of making a rapid, certain diagnosis, and the unreasonable demands made upon patients and families asked to choose in a time-compressed setting.  Overall, it’s an interesting little read.

“Risk communication in the hyperacute setting of stroke thrombolysis: an interview study of clinicians”
http://emj.bmj.com/content/early/2014/05/16/emermed-2014-203717.short

tPA: We Don’t Need No Stinkin’ Consent!

Yes, this is the brave future imagined by our pro-tPA colleagues:  there are neurologists in a van down by the river, and they’ll drive right to your house and give you tPA – without your consent!

This is a research letter from JAMA, in which researchers from UCSF surveyed patient preferences through an online cohort representative of the adult U.S. population over 50 years of age.  These authors, or so they would have you believe, asked participants to compare their desire to receive CPR after cardiac arrest with their desire to receive tPA after a stroke.  75.9% of surveyed participants wanted CPR, and 76.2% wanted tPA.  Therefore, these authors conclude:

“… there are equally strong empirical grounds for presuming individual consent to thrombolysis for stroke as for presuming individual consent to CPR.”

I am not an ethicist, so I’m unable to precisely articulate how odd this comparison is at face value.  Would any therapy patients would choose 75% of the time justify a presumption of consent?  Is CPR the “gold standard” for emergency consent?  There are interesting questions regarding how these data ought to be interpreted in the context of emergency consent that I’m not qualified to answer.

However, I am qualified to comment on their methodology – i.e., the best way to get the answer you want: ask the question in such a way that respondents will answer how you intend.  How did they ask patients if they wanted CPR?  They showed them a “depiction of probabilistic outcomes after paramedic-initiated CPR”.  This depiction is not provided in the text, only a reference to the article they used to create it.  For tPA?  They used a graphical depiction of the benefits of tPA from this article.  They do not specify which depiction they used, but the final product of those previous pro-tPA physicians was this, Figure 3:

With a graphic like this, is it any wonder the patients surveyed were amenable to tPA?  Interestingly, the authors who created the graphical depiction state this graphic “complements the numeric text of a national patient education tool developed jointly by US neurology, emergency medicine, and stroke patient organizations.”  The link in their citations is broken, but I have found a reproduction here, which contains the following fantastic isolated quote:

“If given promptly, 1 in 3 patients who receive tPA resolve their symptoms or have major improvement in their stroke symptoms.”

It boggles the mind that ACEP was complicit in approving this horrible flyer.  As you can now see, this seemingly trivial document has since catastrophically mutated into the terrifying basis for giving tPA without informed consent.

“Testing the Presumption of Consent to Emergency Treatment for Acute Ischemic Stroke”
http://jama.jamanetwork.com/article.aspx?articleid=1861784

The Whole Truth, and Anything But

The publication of clinical trials in high-impact journals represents one of the most effective forms of knowledge translation for new medical evidence.  Three of these journals – JAMA, the New England Journal of Medicine, and The Lancet – perennially rank among the highest-impact.  As I’ve mentioned before, these journals have a higher responsibility to society at large to maintain scientific integrity, as most readers accept the authors’ presented results and conclusions at face value.

However, clinical trials are also required to report their results on ClinicalTrials.gov.  These authors reviewed one year’s worth of clinical trials published in the three aforementioned journals and compared the high-impact results with those stashed away on ClinicalTrials.gov.  Across the 91 trials identified, there were 156 primary and co-primary endpoints, but only 132 were described in both sources – and only 61% were concordant between the two sources.  Of 2,089 secondary endpoints, 619 were described in both sources – and these were only 55% concordant.

Furthermore, the authors identified six studies whose primary outcomes, as noted on ClinicalTrials.gov, would result in an alternative interpretation of the trial.  These included changes in disease resolution or progression time, as well as results that achieved statistical significance in the publication, but not on ClinicalTrials.gov.

The authors conclude:

“…possible explanations include reporting and typographical errors as well as changes made during the course of the peer review process …. journal space limitations and intentional dissemination of more favorable end points and results in publications.”

We ought to expect better vetting of results by journal editors – particularly from sources frequently followed by the lay media.

“Reporting of Results in ClinicalTrials.gov and High-Impact Journals”
http://www.ncbi.nlm.nih.gov/pubmed/24618969

There’s No Telling What Patients Want

“Shared decision-making” has become a frequent watchword of sorts, encompassing participatory concepts in which patients are better involved in their own care.  I, and many others, have espoused this sort of paradigm in medicine.

Unfortunately, there’s a bit of a problem.  On the physician side, we probably don’t have good mechanisms through which to translate evidence to individual patients.  Most information derived from clinical studies describes outcomes from aggregated cohorts – so, usually, the best we can do is inform our patients how the “average” person performed with a specific treatment.

Then, on the patient side – as this study demonstrates – their risk-taking behavior is heterogeneous, irrational, and extreme.  These authors report on 234 surveys of patients presenting with low-acuity chest pain in a Veterans Affairs cohort, trying to get a handle on hospitalization preferences given a certain pretest likelihood of disease.  Their basic model:  hospitalization reduces the risk of a bad outcome by 10%.  Then, they asked whether the patient would like to be hospitalized for base likelihoods of a poor outcome ranging from 1 in 2 to 1 in 10,000.

Half the patients wanted to be hospitalized even when hospitalization reduced the event rate from merely 1 in 10,000 to 1 in 11,000 (an NNT of 110,000).  Another 10% of patients wanted to be discharged in all circumstances, even when the risk of a poor outcome was reduced from 1 in 2 to 5 in 11 (an NNT of 22).  And how the risks were communicated – including whether visual or numeric scales were used – also affected how the patients chose.
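
To make the arithmetic explicit, here is a minimal sketch of the number-needed-to-treat calculation behind those two extremes – my own illustration of the quoted figures, not anything supplied with the study:

```python
# NNT arithmetic for the two extremes quoted above (illustrative only).
# NNT = 1 / absolute risk reduction (ARR).

def nnt(risk_without: float, risk_with: float) -> float:
    """Number needed to treat, given event risks without and with treatment."""
    return 1.0 / (risk_without - risk_with)

# Scenario 1: hospitalization drops the event rate from 1 in 10,000 to 1 in 11,000.
print(round(nnt(1 / 10_000, 1 / 11_000)))  # ~110,000

# Scenario 2: hospitalization drops the event rate from 1 in 2 to 5 in 11.
print(round(nnt(1 / 2, 5 / 11)))           # 22
```

The spread between an NNT of 22 and an NNT of 110,000 is a reasonable shorthand for just how extreme the tails of patient preference were.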

So, ultimately – yes, we’d like to involve patients in their decisions.  But, unfortunately, it looks as though it’s going to be quite the challenging proposition – and we might not like (or have the capacity to abide by) their preferences.

“Measuring Patient Tolerance for Future Adverse Events in Low-Risk Emergency Department Chest Pain Patients”
http://www.ncbi.nlm.nih.gov/pubmed/24530111

Open the Data

There is a committee within the Institute of Medicine charged with examining the issues associated with data sharing after randomized controlled trials.

Data-sharing, without question, is a dire need.  From companies behaving badly – such as Merck with Vioxx, or Roche with Tamiflu – to inadvertent errors in analysis, protecting the health of patients requires more than simple peer review of documents prepared for pharmaceutical corporations by medical communication professionals.

Jeff Drazen, in this editorial, makes a call for feedback to the IOM.  Oddly, his main concern is: how long ought the original authors of a study be allowed exclusive access to trial data?  Would open data disincentivize researchers from performing clinical investigations, knowing their academic and commercial benefit would likely be curtailed?  On the flip side, we have seen the publication of trial data massively delayed – see ATLANTIS Part A, withheld for seven years – by pharmaceutical companies concerned with protecting their business interests.

It is a complicated and subtle issue, to be sure, but appropriate transparency is almost certainly an improvement over the current situation.  Full details, and how to leave feedback, are at:
http://www.iom.edu/activities/research/sharingclinicaltrialdata.aspx

“Open Data”
http://www.nejm.org/doi/full/10.1056/NEJMe1400850 (open access)

FDA: The Black Knight

… specifically, the Black Knight from Monty Python, apparently reduced to nibbling impotently at the feet of pharmaceutical corporations as they sail through the approval process.

This study in JAMA reviews the characteristics of novel therapeutics approved by the FDA between 2005 and 2012.  These authors identified 188 novel agents approved for 206 indications, and describe an entire host of fascinating data regarding the trials supporting approval.  A few of the most damning pearls:

  • 37% of indications were approved on the basis of a single trial.
  • The median number of patients per trial was 760.
  • 49% of trials used only surrogate outcomes.
  • Surrogate-outcome trials constituted the sole basis of approval for 91 indications.
  • Only 48% of cancer trials were randomized, and only 27% were double-blinded.
  • 40 trials were part of an accelerated approval process; 39 of these used surrogate outcomes, with a median of 157 patients per trial.

The data go on and on.

Considering many landmark trials could not be independently reproduced, even with the help of the original researchers; most published research findings are false; and half of what you know is wrong – we might as well just dump poison in our water supply.  It’s cheaper than suffering the next blockbuster drug for which pharmaceutical companies engineer an indication through distorted trial design.

“Clinical Trial Evidence Supporting FDA Approval of Novel Therapeutic Agents, 2005-2012”
http://jama.jamanetwork.com/article.aspx?articleid=1817794 (open access)

“Author Insights: Quality of Evidence Supporting FDA Approval Varies”
http://newsatjama.jama.com/2014/01/21/author-insights-quality-of-evidence-supporting-fda-approval-varies/