Insight Is Insufficient

In this trial, we witness a disheartening truth – physicians won’t necessarily do better, even when they know they’re not doing well.

This study tested a combined educational and peer-comparison intervention on primary care physicians in Switzerland, with the goal of improving antibiotic stewardship for common ambulatory complaints. The 2,900 “worst-performing” physicians with respect to antibiotic prescribing rates were enrolled and randomized to the study intervention or to no intervention. The intervention consisted of materials regarding appropriate prescribing, along with personalized feedback on where each physician’s prescribing rate ranked within the national cohort. The core hypothesis was whether this passive knowledge of peer performance, by itself, would exert a normalizing influence on practice.

Unfortunately, despite providing these physicians with this insight, as well as tools for improvement, the net effect of the intervention was effectively zero. There were some observations regarding changes in prescribing rates for certain age groups and for certain types of antibiotics, but dredging through these secondary outcomes leads only to unreliable conclusions.

This is not particularly surprising data. These sorts of passive feedback mechanisms unhitched from material consequences have never previously been shown to be effective. There are other, more effective mechanisms – focused education, decision-support interventions, and shared decision-making – but, for a fragmented, national health system, this represented a relatively inexpensive model to test.

Try again!

“Personalized Prescription Feedback Using Routinely Collected Data to Reduce Antibiotic Use in Primary Care”

https://www.ncbi.nlm.nih.gov/pubmed/28027333

The Downside of Antibiotic Stewardship

There are many advantages to curtailing antibiotic prescribing. Costs are reduced, fewer antibiotic-resistant bacteria are induced, and treatment-associated adverse events are avoided.

This retrospective, population-based study, however, illuminates the potential drawbacks. Using an electronic record review covering 10 years of general practice encounters, these authors compared infectious complication rates between practices with low and high antibiotic prescribing rates. Across 45.5 million person-years of follow-up after office visits for respiratory tract infections, there is both reason for reassurance and reason for further concern.

On the “pro” side, rates of mastoiditis, empyema, bacterial meningitis, intracranial abscess, and Lemierre’s syndrome were no different between practices with high prescribing rates (>58%) and those with low rates (<44%). However, there is a reasonably clear linear relationship with excess follow-up encounters for both pneumonia and peritonsillar abscess. Relative to the reference group, incidence rate ratios were 0.70 for pneumonia and 0.78 for peritonsillar abscess. The absolute differences, however, can best be described as a “large handful” and a “small handful” of extra cases per 100,000 encounters.
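
As a rough back-of-the-envelope illustration of how a ratio of that size turns into a “handful” – my own arithmetic, using an assumed baseline rate that is not from the paper – consider:

# Illustration only: the baseline rate below is assumed, not taken from the BMJ study.
baseline_per_100k = 25.0   # hypothetical pneumonia follow-up cases per 100,000 RTI visits (reference group)
irr = 0.70                 # incidence rate ratio reported for pneumonia

# An IRR of 0.70 corresponds to roughly a 43% relative difference (1 / 0.70 ~= 1.43).
comparator_per_100k = baseline_per_100k / irr
excess_per_100k = comparator_per_100k - baseline_per_100k
print(f"~{excess_per_100k:.1f} extra pneumonia cases per 100,000 encounters")  # ~10.7 - a "handful"

The arithmetic is trivial; it is the baseline rate that determines whether a 30% relative difference matters in absolute terms.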

There are many rough edges and flaws in these data, some of which are probably adequately offset by the massive cohort size. I think it is reasonable to interpret this article as accurately reflecting real harms from antibiotic stewardship. More work should absolutely be pursued on strategies to mitigate these potential downstream complications, but I believe the balance of benefits and harms still falls on the side of continued stewardship efforts.

“Safety of reduced antibiotic prescribing for self limiting respiratory tract infections in primary care: cohort study using electronic health records”

http://www.bmj.com/content/354/bmj.i3410

How Many ED Visits are Truly Inappropriate?

I’ve seen quite a bit of feedback on social media regarding this research letter in JAMA Internal Medicine.

This study evaluated, using National Hospital Ambulatory Medical Care Survey data, the incidence of hospital admission stratified by triage Emergency Severity Index.  They analyzed 59,293 representative visits from the sample and found 7.5% of them, on a weighted basis, were categorized as “non-urgent” – an ESI level 5 or presumed equivalent.  The typical assumption regarding these non-urgent visits is they represent inappropriate Emergency Department utilization.  This study found, however:

“… a nontrivial proportion of ED visits that were deemed nonurgent arrived by ambulance, received diagnostic services, had procedures performed, and were admitted to the hospital, including to critical care units.”

There are always limitations to the NHAMCS data, particularly with missing and imputed data.  Beyond that, I tend to feel these data lack face validity.  The weighted incidence of admission for non-urgent visits was 4.4%, compared with 12.8% of urgent visits, while 0.7% of non-urgent visits resulted in admission to critical care units, compared with 1.3% of urgent visits.  I certainly do not see similar relative proportions of admission – let alone to critical care – for ESI level 5 patients in my multiple practice environments.
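
For readers unfamiliar with how these “weighted” figures arise, here is a minimal sketch of computing survey-weighted admission proportions by triage level – the column names are hypothetical stand-ins, not the actual NHAMCS variables, and proper variance estimation would also require the survey’s strata and sampling units:

import pandas as pd

# Toy data: one row per sampled ED visit, with hypothetical columns standing in
# for the NHAMCS triage level, admission flag, and visit-level survey weight.
visits = pd.DataFrame({
    "esi_level": [5, 5, 5, 3, 3, 2],
    "admitted":  [0, 1, 0, 0, 1, 1],
    "weight":    [310.0, 250.0, 290.0, 400.0, 410.0, 380.0],
})

def weighted_rate(group):
    # Weighted proportion = sum(weight * admitted) / sum(weight)
    return (group["weight"] * group["admitted"]).sum() / group["weight"].sum()

print(visits.groupby("esi_level").apply(weighted_rate))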

Regardless, the general implication made by these authors is probably reasonable: ESI triage level 5 cannot be used to accurately identify inappropriate Emergency Department visits.  However, left equally unstated is an acknowledgement that ESI also fails to accurately categorize urgent visits – which ties into the rhetoric conflating “non-urgent” with “inappropriate” and “urgent” with “appropriate”.

ESI, as currently implemented, will not be a reliable tool for directing patients to other sources of care – but, with some fuzziness, probably still gives a reasonable estimate of the overall burden of inappropriate ED visits for some policy applications.

“Urgent Care Needs Among Nonurgent Visits to the Emergency Department”
https://www.ncbi.nlm.nih.gov/pubmed/27089549

Too Many Tests! Or, So We Believe ….

Yes, Virginia, we order too many tests.  And, we know it – as evidenced by entire conferences devoted to overdiagnosis and the costs of care.  Even more relevant than such academic exercises, this study indicates the general clinician has a fair bit of self-awareness, too.

In this survey of 435 respondents, 85% of emergency physicians believed excessive testing occurred in their Emergency Department.  Most frequently, such testing was motivated by fear of missing even rare diagnoses, with defensive medicine and malpractice concerns a close second.  Patient expectations, local practice patterns, and time savings were also substantially cited as motivators for ordering.  Thankfully, administrative and personal motivations to increase reimbursement were rarely reported.

Despite the protestations of some policy-makers, the clinicians surveyed believed the most helpful change to the system would be malpractice reform.  Interestingly, the next-ranked helpful interventions included educating patients and increasing shared decision-making.  While the first item may be logistically (or politically) unachievable, there are no obstacles to integrating improved communication behaviors into routine practice.  It does, however, highlight the need for greater availability of tools clinicians can use at the point of care.

There are flaws in these sorts of perception-based surveys with regard to the accuracy of such anecdotal self-assessment.  Physician assessment of their own practice and that of others can certainly be questioned.  It must be admitted, however, a more intensive just-in-time surveying method would likely impact the variables measured.

There are also some highly entertaining outliers in Figure 2, the comparison of perceived self vs. colleague ordering.  There is a handful of physicians who believe they, themselves, order over 80% of their CTs and MRIs unnecessarily – but that no one else in their group does.  Likewise, there is a handful with just the opposite perception – that their colleagues over-order, while they, themselves, rarely do.  I wonder if they work in the same department?

Regardless, the first step is admitting you have a problem.  We have many steps yet to go.

“Emergency Physician Perceptions of Medically Unnecessary Advanced Diagnostic Imaging”
http://onlinelibrary.wiley.com/doi/10.1111/acem.12625/abstract

Sometimes, The Stick Doesn’t Work

Pressure ulcers, catheter-associated UTIs, central-line infections, and injuries from falls are all iatrogenic injuries associated with healthcare and hospitalization.  Fewer of all these events would be ideal.

Of course, since asking nicely isn’t much of a motivation for healthcare delivery systems to improve practice, Medicare had a different solution – non-payment.  In 2008, Medicare ceased allowing hospitals to claim higher-severity diagnosis-related group codes to account for costs incurred by eight “never event” complications.  Money, on the other hand, is a strong motivator for change.  This study tries to evaluate just how successful such a heavy stick is at influencing care delivery.

These authors looked at the National Database of Nursing Quality Indicators, counting reported ulcers, falls, CLABSI, and CAUTI occurring between 2006 and 2010.  The trends reported for each differ starkly.  For CLABSI and CAUTI, in the quarters leading up to CMS policy change, the prevalence of each was gradually increasing.  After 2008, however, both trends show abrupt and consistent reversal and downward movement.  For pressure ulcers and injurious falls, however, the prevalence was gradually decreasing at the time of CMS policy implementation, and the slope of the line after 2008 is consistent with that same gradual decline.

The authors go into the limitations of each data source, but the general takeaway is likely still valid – some “never events” just aren’t consistently, systematically preventable.  There are concerted, teachable best practices for decreasing CLABSI and CAUTI.  Fall prevention and pressure ulcer prevention, on the other hand, are less amenable to care bundles, and seem to depend on gradual cultural change and vigilance.  Thus, outcomes-focused quality improvement using a financial motivator, while a reasonable method to try, will probably have the greatest impact and yield where a validated, evidence-based prevention strategy can be implemented.
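
For the methodologically curious, the slope-before versus slope-after comparison these authors describe is essentially a segmented (interrupted time-series) regression.  A minimal sketch on made-up quarterly data – not the NDNQI figures – might look like this:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up quarterly CAUTI prevalence, 2006-2010 (20 quarters); the CMS nonpayment
# policy is assumed to take effect at quarter index 8 (calendar year 2008).
rng = np.random.default_rng(0)
quarter = np.arange(20)
policy_start = 8
time_after = np.clip(quarter - policy_start, 0, None)
prevalence = 2.0 + 0.05 * quarter - 0.15 * time_after + rng.normal(0, 0.05, 20)

df = pd.DataFrame({
    "prevalence": prevalence,
    "time": quarter,                                  # underlying secular trend
    "post": (quarter >= policy_start).astype(int),    # level change at the policy date
    "time_after": time_after,                         # change in slope after the policy
})

# Segmented regression: the 'time_after' coefficient captures the post-2008 slope change.
fit = smf.ols("prevalence ~ time + post + time_after", data=df).fit()
print(fit.params)

A real analysis would also account for autocorrelation between quarters, but the structure of the comparison is the same.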

“Effect of Medicare’s Nonpayment for Hospital-Acquired Conditions: Lessons for Future Policy”
http://archinte.jamanetwork.com/article.aspx?articleid=2087876

The Wholesale Revision of ACEP’s tPA Clinical Policy

ACEP has published a draft version of their new Clinical Policy statement regarding the use of IV tPA in acute ischemic stroke.  As before, the policy statement aims to answer the questions:

(1) Is IV tPA safe and effective for acute ischemic stroke patients if given within 3 hours of symptom onset?
(2) Is IV tPA safe and effective for acute ischemic stroke patients treated between 3 to 4.5 hours after symptom onset?

Most readers of this blog are familiar with the mild uproar the previous version caused, and this revision opens by stating “changes to the ACEP clinical policies development process have been implemented, the grading forms used to rate published research have continued to evolve, and newer research articles have been published.”  Left unsaid, in presumably a bit of diplomacy, were the conflicts of interest befouling the prior work.  Notably absent from this work is any involvement from the American Academy of Neurology.

What’s new, with a methodology-focused rather than conflicted-expert-opinion approach?  Most obviously, there’s a new Level A recommendation – focused on the only consistent finding across all tPA trials: clinicians must consider a 7% incidence of symptomatic intracranial hemorrhage, compared with 1% in the placebo cohorts.  The previous Level A recommendation to treat within 3 hours has been downgraded to Level B.  Treatment up to 4.5 hours remains Level B.  Finally, a new Level C recommendation includes a consensus statement recommending shared decision-making between the patient and a member of the healthcare team regarding the potential benefits and harms.
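
For context, the quoted hemorrhage rates translate into a rough number-needed-to-harm – my arithmetic on the figures above, not a number from the draft policy:

# Back-of-the-envelope NNH from the rates quoted above: 7% sICH with tPA vs. 1% with placebo.
sich_tpa, sich_placebo = 0.07, 0.01
nnh = 1 / (sich_tpa - sich_placebo)
print(f"~1 additional symptomatic ICH for every {nnh:.0f} patients treated")  # ~17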

Most of the reaction on Twitter has been, essentially, a declaration of victory.  And, in a sense, it is certainly a powerful statement regarding the ability of like-minded patient advocates and evidence purists to coalesce through alternative media and initiate a major change in policy.  Critiquing this new effort is a bit like punishing the good for not being perfect, but there are a number of oddities worth providing as feedback to the writing committee:

  • The authors provide a curious statement:  “The 2012 IV tPA clinical policy recommendation to ‘offer’ tPA to patients presenting with acute ischemic stroke within 3 hours of symptom onset was consistent with other national guidelines. Unfortunately, the essence of the term ‘offer’ may have been lost to readers and has therefore been avoided in this revision.”  I rather find “offer” a lovely term, in the sense it expresses a cooperative process for proceeding forward with a mutually agreed upon treatment strategy.  Rather than discarding the term, clarifying it might have been more reasonable.
  • They mention ATLANTIS as Class III evidence with regard to the 3-4.5 hour question.  I can see how its classification may be downgraded given the multiple protocol revisions.  That said, its inability to find a treatment benefit, in spite of extensive sponsor involvement, ought to be a more powerful negative weighting than currently acknowledged.  Given the biases favoring the treatment group in ECASS III (given a Class II evidence label), the cumulative evidence probably does not support a Level B recommendation for the 3-4.5 hour window.
  • One of my Australian colleagues in private communication brings up a small letter from Bradley Shy, previously covered on this blog, mentioning a statistical change to ECASS III.  This statement could acknowledge this post-publication correction and its implications regarding the aforementioned imbalance between groups.
  • The authors fail to acknowledge the heterogeneity of acute ischemic stroke syndromes and patient substrates, and the utter paucity of individualized risk or benefit assessment tools – in no small part a consequence of the small sample sizes of the few trials rated as Class I or Class II evidence.  This is a powerful platform from which to state clinical equipoise exists for continued placebo-controlled randomization.  As we see from the endovascular trials, the acute recanalization rate of IV tPA is as low as 40% – with many patients re-occluding following completion of the infusion.  Patients need to be selected more narrowly with respect to likelihood of benefit compared with supportive care.  I believe tPA helps some patients, but it should be a goal to dramatically reduce the costs and collateral damage associated with rushing to treat mimics and patients without a favorable balance of risks and benefits.  For these authors to recommend treatment in “carefully selected patients” and “shared decision-making”, more guidance should be provided – and, absent the evidence to support such guidance, they should be calling for more trials!

The comment period is open until March 13, 2015.

“Clinical Policy: Use of Intravenous tPA for the Management of Acute Ischemic Stroke in the Emergency Department DRAFT”
http://www.acep.org/Clinical-Policy-Comment-form-Intravenous-tPA/

Addendum 01/18/2015:
The SAEM EBM interest group is compiling comments on the evidence for feedback to the SAEM board of directors.  These are my additional comments after having had additional time to digest:

  • I agree with sICH as a Level A recommendation.  Both RCTs and observational registries tend to support such a recommendation.  Whether the pooled risk estimates are usable in knowledge translation to individual patients is less clear.  The risk of sICH is highly variable depending on individual patient substrate.  There are several risk stratification instruments described in the literature, but none are specifically recommended/endorsed/prospectively validated in large populations.
  • It is uncertain whether their intention with the NINDS data is to present Part 1 and Part 2 pooled.  The prior clinical policy used only Part 2 for its NNT calculation, giving rise to an NNT of 8 instead of 6.  It appears they are pooling the data from both parts here.  Either approach is fine as long as it’s explicitly stated – the primary outcome differed, but the enrollment and eligibility should have been the same.
  • ECASS seems to be missing from their evidentiary table.  The ECASS 3-hour cohort data is available as a secondary analysis.  However, such would probably be Class III data of no real consequence for the recommendation.
  • Level B is probably an acceptable level of recommendation for tPA within the 0-3 hour window.  “Moderate clinical certainty” is reasonable, mostly on the strength of the Class III data.  However, the “systems in place to safely administer the medication” is not clearly addressed in the text.  Most of the published clinical trial and observational evidence involves acute evaluation by stroke neurology.  Does the primary stroke center certification practically replicate the conditions in which patients were enrolled in these trials/registries?  Perhaps this should be split out into a separate recommendation regarding the required setting for safe/timely/accurate administration.
  • Level B is difficult to justify for the 3 to 4.5 hour time window.  There is Class II evidence from ECASS III (downgraded due to potential for bias) demonstrating a small benefit.  The authors then cite Class III trial evidence from IST-3 and ATLANTIS in which no benefit was demonstrated.  Then, they cite the individual-patient meta-analysis having a similar effect size to ECASS III – because many of the patients in that subgroup come from ECASS III.  Basically, there’s only a single piece of Class II evidence and then inconsistent Class III evidence, which doesn’t meet the criteria stated for a Level B recommendation (1 or more Class of Evidence II studies or strong consensus of Class of Evidence III studies).
  • With both Level B recommendations, the authors also reference “carefully selected” patients, but do not cite evidentiary basis regarding how to select said patients other than listing the enrollment criteria of trials.  If the “careful selection” is strict NINDS or ECASS III criteria, this should be explicitly stated in the recommendation.
  • The Level C recommendations to have shared decision-making with patients and surrogates ought to be obvious standard medical practice, but I suppose it bears repeating given the publications regarding implied consent for tPA.  They mention two publications regarding review and development of such tools, but there is no evidence supporting their efficacy or effectiveness in use.  Frankly, calling them a starting point in such a heterogeneous population is along the lines of the broken clock that’s right twice a day.  I would rather say their dependence on group-level data minimizes their practical utility, and clinician expertise will remain the best tool for individual patient risk assessment.

Feel free to add your comments and I will incorporate them into my feedback to SAEM.

Who Loves Tamiflu?

Those who are paid to love it, by a wide margin.

This brief evaluation, published in Annals of Internal Medicine, asks the question: is there a relationship between financial conflicts of interest and the outcomes of systematic reviews regarding the use of neuraminidase inhibitors for influenza?  To answer such a question, these authors reviewed 37 assessments in 26 systematic reviews, published between 2005 and 2014, and evaluated the concluding language of each as “favorable” or “unfavorable”.  They then checked each author of each systematic review for relevant conflicts of interest with GlaxoSmithKline and Roche Pharmaceuticals.

Among those systematic reviews associated with author COI, 7 of 8 assessments were rated as “favorable”.  Among the remaining 29 assessments made without author COI, only 5 were favorable.  Of the reviews published with COI, only 1 made mention of limitations due to publication bias or incomplete outcomes reporting, versus most of those published without COI.
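
Laying those counts out as a 2×2 table makes the association easy to eyeball; the quick Fisher exact test below is my own back-of-the-envelope check on the reported counts, not an analysis from the Annals letter:

from scipy.stats import fisher_exact

# Rows: assessments with author COI, assessments without COI.
# Columns: "favorable" conclusion, not favorable.
table = [[7, 1],    # with COI: 7 of 8 favorable
         [5, 24]]   # without COI: 5 of 29 favorable
odds_ratio, p_value = fisher_exact(table)
print(f"Odds ratio ~{odds_ratio:.0f}, Fisher exact p = {p_value:.4f}")

The crude odds ratio works out to (7 × 24) / (1 × 5), roughly 34 – hardly a subtle signal.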

Shocking findings to all frequent readers, I’m sure.

“Financial Conflicts of Interest and Conclusions About Neuraminidase Inhibitors for Influenza”
http://www.ncbi.nlm.nih.gov/pubmed/25285542

Original link in error … although, it’s a good article, too!
http://www.ncbi.nlm.nih.gov/pubmed/24218071

Patient Satisfaction: It’s Door-to-Room Times (Duh)

As customer satisfaction becomes rapidly enshrined as our reimbursement overlord, we are all eager to improve our satisfaction scores.  And, by scores, I mean: Press Ganey.

So, as with all studies attempting to describe patient satisfaction, we unfortunately depend on the validity of the proprietary Press Ganey measurement instrument.  This limitation acknowledged, these authors at Oregon Health and Science University have conducted a single-center study, retrospectively linking survey results with patient characteristics, and statistically evaluating associations using a linear mixed-effects model.  They report three survey elements:  overall experience, wait time before provider, and likelihood to recommend.

Which patients were most pleased with their experience?  Old, white people who didn’t have to wait very long.  Every additional decade in age increased satisfaction, every hour wait decreased satisfaction, and there was a smattering of other mixed effects based on payor source, ethnicity, and perceived length of stay.  What’s interesting about these results – despite the threats to validity and limitations inherent to a retrospective study – is how much the satisfaction outcomes depend upon non-modifiable factors.  You can actually purchase patient experience consulting from Press Ganey, and they’ll come teach you and your nurses a handful of repackaged common-sense tricks – but I’m happy to save your department the money:  door-to-room times.
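
For the statistically inclined, the linear mixed-effects model the authors describe can be sketched roughly as below – the data here are synthetic placeholders, since the actual Press Ganey/OHSU dataset is proprietary:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per returned survey, with age in decades,
# door-to-provider wait in hours, and the treating provider.
rng = np.random.default_rng(1)
n = 400
surveys = pd.DataFrame({
    "age_decades": rng.integers(2, 9, n),
    "wait_hours": rng.exponential(1.0, n),
    "provider_id": rng.integers(0, 20, n),
})
# Satisfaction rises with age, falls with waiting, plus provider-level noise.
provider_effect = rng.normal(0, 2, 20)
surveys["overall_score"] = (70 + 2.0 * surveys["age_decades"] - 5.0 * surveys["wait_hours"]
                            + provider_effect[surveys["provider_id"]] + rng.normal(0, 5, n))

# Fixed effects for age and wait time; a random intercept per provider absorbs clustering.
model = smf.mixedlm("overall_score ~ age_decades + wait_hours",
                    data=surveys, groups=surveys["provider_id"]).fit()
print(model.summary())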

Or change your client mix.

Done.

“Associations Between Patient and Emergency Department Operational Characteristics and Patient Satisfaction Scores in an Adult Population”
http://www.ncbi.nlm.nih.gov/pubmed/25182541

Finding Value in Emergency Care

The Choosing Wisely initiative is, despite its flaws, a necessary cultural shift in medicine towards reducing low-value expenditures.  In a fee-for-service health system, and given the complicated financial framework associated with training and reimbursing physicians, noble endeavors such as these face significant challenges.

Regardless, many specialties – including ACEP – have published at least one “Top 5” list of recommended practices to improve value.  The ACEP Choosing Wisely list, while certainly reflecting sound medical practice, is of uncertain incremental value or applicability over current practice.  Additionally, the methods and stakeholders involved remain opaque.  Luckily, in a coincidental parallel, Partners Healthcare embarked on an unrelated internal process to improve the affordability of healthcare and reduce costs.  This study provides a transparent look at such a process, as well as the ultimate findings.

Using an expert panel, 64 potential low-value care practices were identified in a brainstorming session and subsequently narrowed down to 17.  Then, 174 physicians and clinical practitioners responded to a web-based survey, ranking the value of each.  Based on this feedback, the original expert panel voted again on a final “Top 5”:

  • Do not order computed tomography (CT) of the cervical spine for patients after trauma who do not meet the National Emergency X-Radiography Utilization Study (NEXUS) low-risk criteria or the Canadian C-Spine Rule.
  • Do not order CT to diagnose pulmonary embolism without first risk stratifying for pulmonary embolism (pretest probability and D-dimer tests if low probability). 
  • Do not order magnetic resonance imaging of the lumbar spine for patients with lower back pain without high-risk features. 
  • Do not order CT of the head for patients with mild traumatic head injury who do not meet New Orleans Criteria or Canadian CT Head Rule. 
  • Do not order coagulation studies for patients without hemorrhage or suspected coagulopathy (eg, with anticoagulation therapy, clinical coagulopathy).

It always surprises me to see these lists – which, essentially, just constitute sound, evidence-based practice.  That said, given my exposure primarily to an academic, teaching environment, rather than a community hospital environment concerned with expediency and revenue generation, these may be larger problems than I expect.  This list also does not address the estimated cost-savings associated with adherence to these “Top 5” best practices.  While many of these may result in significant cost-savings through reductions in imaging, the yield would be variable based on the quality of care already in place.

Regardless, this list is as much about the derivation process itself as about the resulting “Top 5”.  Certainly, the transparency documented in this study is superior to the undocumented process behind ACEP’s contribution.  That said, this list ultimately reflects the biases and practice patterns of a single healthcare network in Massachusetts; your mileage may vary.  Many of the final “Top 5” had overlapping confidence intervals on the Likert scale for benefit and actionability, suggesting a different survey instrument may have provided better discrimination.  Finally, while we are culturally enamored with “Top 5” lists, all 64 items in their original set are important considerations for improving the value of care.

We, and all of medicine, have a long way to go – but these are steps along the right path.  It is also critically important that we (wisely) choose our own destiny, rather than wait for government or insurance administrators to enforce their misguided priorities upon our practice.

“A Top-Five List for Emergency Medicine: A Pilot Project to Improve the Value of Emergency Care”
https://archinte.jamanetwork.com/article.aspx?articleid=1830019

My ACEP tPA Policy Critique

As some of you are aware, there is controversy regarding the use of tPA for acute ischemic stroke.  To this end, ACEP has opened up public comment for their recent relevant Clinical Policy Statement.

The comment form, however, is a bit odd and onerous in its format.  To spark discussion, to provide inspiration – and for public feedback/comment/correction – here are a few points from the initial draft of my response:
Page 225, Line 3, author list:
Area of Content / Concept – Conflict of Interest in Guideline Development Group (GDG)
The Institute of Medicine publishes recommendations regarding the composition and conflict of interest disclosures of a Clinical Practice Guideline (CPG) writing panel.  This tPA policy statement falls short on several accounts, most importantly:
Standard 2.1:  
The disclosures listed by the authors only narrowly address their direct financial relationships, but do not describe non-commercial, intellectual, and patient/public activities pertinent to the scope of the CPG.  For instance, authors do not fully describe their relationships with FERNE, a pharmaceutical-supported organization, nor other indirect and intellectual activities relevant to the CPG.
Standard 2.3:
The recommendation states members of the GDG should not participate in marketing activities or advisory boards of entities whose interests are affected by the recommendations.  This standard does not appear to be met.
Standard 2.4:
Whenever possible, GDG members should not have COI.  If this is not possible, members with COIs should represent only a minority.  The chair, or co-chair, should not have COI.  This standard does not appear to have been met.
Standard 3.1:
The GDG should be multidisciplinary with methodological experts, clinicians, and patients.  The GDG members appear to be primarily clinicians and administrators, rather than patients and methodologic experts.
Standard 7:
The external review process is only briefly described, and the guideline was not open to public comment until this 60-day period.
Additional comments on the integrity of CPGs come from a recent BMJ article, “Ensuring the integrity of clinical practice guidelines: a tool for protecting patients.”  The recommendations from this publication are similar to the IOM recommendations, with the addition that GDG members ought to represent diverse viewpoints regarding the topic in question.  It does not appear this CPG represented any point of view other than a pro-tPA one.  The ACEP tPA clinical policy was evaluated using the “red flag” methodology from this article and found to be lacking.
Based on the COIs identified in this GDG, the output lacks face validity.
Page 227, Line 23:
Area of Content / Concept – Patient Management Recommendations
The guideline specifies offering IV tPA to acute ischemic stroke patients within 3 hours to be a Level A recommendation.  A Level A recommendation, according to the methodology of the CPG, states this indicates “Generally accepted principles for patient management that reflect a high degree of clinical certainty.”
The evidentiary table cites several articles as Class I evidence specifically relevant to the 3 hour timeframe (NINDS, ECASS II, and ATLANTIS B 0-3).  NINDS, by all accounts, shows benefit, though the effect size is debated by many based on the baseline characteristics of the treatment groups.  ECASS II is a negative trial; it is inappropriate to treat a subgroup analysis consisting of the 158 patients treated within 3 hours as the same level of Class I evidence.  The ATLANTIS B publication cited is likewise a very small subgroup analysis of a heavily modified trial stopped early for futility, and should not be considered the same level of Class I evidence.
Notably missing from this evidentiary table is ATLANTIS Part A – cited in the text, but apparently not considered as contributory to the CPG.  This is randomized, placebo-controlled evidence stopped early due to patient harms in the tPA cohort.  This merits inclusion in the evidentiary table as evidence of the harms of IV tPA.
The Lees meta-analysis, among others performed prior to 2010, is appropriately level II evidence.  However, it should be noted there are significant methodologic concerns associated with performing a meta-analysis that includes trials stopped early by their sponsors for futility or harms alongside trials allowed by their sponsors to run to their conclusion.  The evidentiary table notes “some of the analyzed studies were industry supported.”  To be more precise, all of the listed studies have substantial COI with industry – including employees of the sponsoring corporations listed among study authors – except NINDS, where the COI is minimized.
The remaining observational evidence is not relevant to a Level A recommendation.  The Hill and Wahlgren Phase IV studies are not placebo-controlled and only offer substantially limited, indirect, adjusted comparisons to a non-tPA treated population regarding safety and effectiveness.
My point, therefore, is that NINDS is unique in its support of tPA.  It is irresponsible to base a Level A treatment recommendation on a single positive study with a disputed effect size, whose results cannot be considered externally valid to current stroke practice.  NINDS – along with most evidence cited here – describes use in the controlled trial environments of academic stroke centers supported by stroke neurologists.  There is insufficient evidence to support its general effectiveness, compared with standard treatment, in community Emergency Medicine.  Additionally, the observational evidence cited in the evidentiary table clearly describes a heterogeneous acute stroke population, with varying levels of sICH risk, mortality risk, and capacity to benefit.  A CPG should not make a global recommendation for treating such a heterogeneous disease without providing tools for physicians to communicate individualized risks and benefits.  Unfortunately, the placebo-controlled data are of insufficient quantity and quality to guide such therapy.
The dangers of basing treatment decisions on small trials and small effect sizes are grounded in statistical theory.  Dr. Ioannidis demonstrates why “most published research findings are false,” a hypothesis apparently borne out by “A decade of reversal: an analysis of 146 contradicted medical practices.”
Furthermore, a Level A recommendation constitutes “generally accepted principles” with a “high degree of clinical certainty.”  If this were indeed the case, tPA for acute ischemic stroke would not be a controversial therapy 18 years past its introduction.  Respected U.S. Emergency Medicine experts in critical appraisal and knowledge translation – e.g., Jerome Hoffman, David Schriger, David Newman, Anand Swaminathan, and many others – feel tPA remains an unproven and inadequately described therapy for acute ischemic stroke.  Indeed, Dr. Swaminathan ably debates Dr. Jagoda, one of the authors of this Clinical Policy, in a podcast hosted by Scott Weingart.

Citation:  http://emcrit.org/podcasts/tpa-for-ischemic-stroke-debate/

Lest ACEP consider these objections simply a vocal minority or lunatic fringe, it should be noted active debate continues in the international literature as well.  Just last year, the BMJ posted a “Head to Head” debate regarding the proven efficacy of tPA in acute ischemic stroke.  An unscientific poll accompanying the article reported a slim majority of respondents did not feel tPA was proven effective.

Citation:  http://www.bmj.com/content/347/bmj.f5215

Additionally, there remains disagreement between international professional societies regarding the use of tPA in acute ischemic stroke.  While several countries endorse its use, others, such as Australia and New Zealand, remain concerned the strength of the evidence does not support widespread use.

Citations:
http://bit.ly/1nY6P18
http://bit.ly/1gRpYRs

In summary, the evidence does not support a Level A recommendation for tPA for ischemic stroke within 3 hours.

There is no reasonable justification for anything higher than a Level B recommendation – and, even then, a caveat stating this therapy has never been demonstrated to be effective, compared with usual care, outside of controlled clinical trial environments.  Physicians may consider offering this therapy based on comfort, diagnostic certainty, supporting resources, and institutional commitment, but it should not be considered the standard of care.

Page 227, Line 28:
Area of Content / Concept – Patient Management Recommendations
The Level B recommendation given to IV tPA in this window is inappropriate given the lack of unbiased evidence in support of treatment beyond the 3 hour time window.  This use is not approved in the U.S., and the Class I evidence regarding the 3-4.5 hour time window is conflicting.  ECASS, ECASS II, and ECASS III are manufacturer-sponsored studies, of which only ECASS III demonstrated benefit.  ATLANTIS, also manufacturer-sponsored, enrolled patients similar to the NINDS criteria and showed harms beyond 5 hours, futility in the 3-5 hour window, and no usable insights in the 0-3 hour timeframe.  As Clark and Madden note in “Keep the three hour tPA window: the lost study of Atlantis,” ECASS III alone is not enough to refute prior evidence of either futility or harm.  There is a reason the FDA still has not approved IV tPA beyond 3 hours.
ECASS III is also flawed by a baseline imbalance, with a greater proportion of patients with prior stroke in the placebo arm.  The absolute difference in the percentage of patients with prior stroke is identical to the effect size offered by IV tPA.  Even more concerning regarding the internal validity of the findings, ECASS III offers this COI statement:
Supported by Boehringer Ingelheim.
Dr. Hacke reports receiving consulting, advisory board, and lecture fees from Paion, Forest Laboratories, Lundbeck, and Boehringer Ingelheim and grant support from Lundbeck; Dr. Brozman, receiving consulting and lecture fees from Sanofi-Aventis and consulting fees and grant support from Boehringer Ingelheim; Dr. Davalos, receiving consulting fees from Boehringer Ingelheim, the Ferrer Group, Paion, and Lundbeck and lecture fees from Boehringer Ingelheim, Pfizer, Ferrer Group, Paion, and Bristol-Myers Squibb; Dr. Kaste, receiving consulting and lecture fees from Boehringer Ingelheim; Dr. Larrue, receiving consulting fees from Pierre Fabre; Dr. Lees, receiving consulting fees from Boehringer Ingelheim, Paion, Forest, and Lundbeck, lecture fees from the Ferrer Group, and grant support from Boehringer Ingelheim; Dr. Schneider, receiving consulting fees from the Ferrer Group, D-Pharm, BrainsGate, and Stroke Treatment Academic Industry Round Table (STAIR) and lecture fees from Boehringer Ingelheim and Trommsdorff Arzneimittel; Dr. von Kummer, receiving consulting fees from Boehringer Ingelheim and Paion and lecture fees from Boehringer Ingelheim and Bayer Schering Pharma; Dr. Wahlgren, receiving consulting fees from ThromboGenics, lecture fees from Ferrer and Boehringer Ingelheim, and grant support from Boehringer Ingelheim; Dr. Toni, receiving consulting fees from Boehringer Ingelheim and lecture fees from Boehringer Ingelheim, Sanofi-Aventis, and Novo Nordisk; and Drs. Bluhmki, Machnig, and Medeghri, being employees of Boehringer Ingelheim.
As I did not mention it previously, the meta-analysis by Lees also supplies this relevant COI statement:
KRL, RvK, DT, MK, and WH report honoraria from Boehringer Ingelheim for their roles in conduct of the ECASS trials. KRL reports honoraria from Lundbeck and Thrombogenics. DT reports honoraria from Novo-Nordisk, Pfizer, Sanofi-Aventis, and Boehringer Ingelheim. EB is an employee of Boehringer Ingelheim. RvK and JCG report being consultants to Lundbeck. GWA reports being a consultant to Lundbeck and Boehringer Ingelheim. SMD reports honoraria from Boehringer Ingelheim.
Confident translation in policy of clinical trial evidence having this level of COI is simply not reasonable.

Finally, it should also be noted the updated meta-analysis by Wardlaw accompanying the publication of IST-3 no longer shows a statistically significant benefit for the outcome of alive and independent in the 3-6 hour time frame – moving from OR 1.17 (1.00 – 1.36) to OR 1.07 (0.96 – 1.20).  This ought to be viewed as regression to the mean as the sample size continues to increase.  Considering thrombolytic trials for myocardial infarction enrolled 140,000 patients, rather than the ~3,500 tPA patients from the trials included in the Wardlaw meta-analysis, this should serve as a warning regarding the inadequacy of the current evidence.
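
To show where “no longer statistically significant” comes from, the z-statistic can be backed out of the reported confidence interval – simple arithmetic on the published summary numbers, nothing more:

from math import log

# Updated Wardlaw pooled estimate, alive-and-independent, 3-6 hour window: OR 1.07 (0.96-1.20).
or_point, ci_low, ci_high = 1.07, 0.96, 1.20
se_log_or = (log(ci_high) - log(ci_low)) / (2 * 1.96)   # standard error on the log-odds scale
z = log(or_point) / se_log_or
print(f"z ~= {z:.2f}")  # well below 1.96: the 95% CI crosses 1.0, so p > 0.05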

In summary, treatment beyond 3 hours can only be recommended based on “expert” (e.g., the sponsored mouthpieces of industry) opinion, should be considered only as part of prospective research, and should absolutely not be recommended or implied to be the standard of care.
Page 232, Line 28:
Area of Content / Concept – NINDS Exclusion Criteria
There is no mention of oral anticoagulation in these treatment recommendations, other than reference to the NINDS exclusion criteria regarding PTT and PT.
The safety of tPA given concomitant use of coumadin and the novel oral anticoagulants (direct thrombin inhibitors, factor Xa inhibitors) is not established.  Contradictory findings from meta-analyses and systematic reviews suggest increased risk of bleeding, even with INR <1.6.  Any CPG recommending offering IV tPA is remiss in excluding mention of these commonly prescribed medications in the population most at risk for stroke.
tPA cannot yet be considered safe in the setting of anticoagulation, despite the NINDS criteria, in the absence of high-quality data on the subject.
Citations: