Prescribing Opiates to the Entire House

Opiate prescribing has blossomed into an appropriately huge issue in the current medical landscape. A fair bit of thought now goes into evaluating individuals for their potential for use and misuse – including even state-mandated prescription database review.

But this interesting analysis suggests it should not be only the individual recipient considered when prescribing – but also the impact on the health of the entire household. These authors compared administrative health care claims from 12,695,280 patients with a family member prescribed opiates against 6,359,639 patients whose family members were prescribed a non-opiate analgesic. Within one year, 11.68% of family members of those prescribed an opiate subsequently received an opiate prescription of their own, compared with 10.60% in the non-opiate cohort. After statistical adjustment, the absolute difference narrowed somewhat, and the authors acknowledge their sensitivity analysis cannot rule out an unmeasured confounder invalidating their findings.
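
As a quick back-of-the-envelope illustration – treating the unadjusted difference as causal purely for arithmetic’s sake – the rates above imply roughly one extra opiate initiation for every ~93 exposed households:

```python
# Unadjusted 1-year initiation rates reported in the abstract
opiate_households = 0.1168      # family member of an opiate recipient
non_opiate_households = 0.1060  # family member of a non-opiate analgesic recipient

ard = opiate_households - non_opiate_households  # absolute risk difference
print(f"ARD: {ard:.2%}")                                    # ~1.08%
print(f"~1 extra initiation per {1 / ard:.0f} households")  # ~93
```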

Regardless, this fits with my anecdotal experience – many patients coming in for musculoskeletal pain have used a family member’s leftover opiate medication for breakthrough pain control. Despite the underlying limitations of this statistical analysis, it certainly seems to have face validity. It is reasonable to consider not just the individual patient being prescribed opiates, but also the risk that the household supply becomes a gateway to subsequent opiate prescribing for family members.

“Association of Household Opioid Availability and Prescription Opioid Initiation Among Household Members”
https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2664515

Atraumatic Spinal Needles are Less Traumatic

It’s a tautology!

In a solid “not news, but newsworthy” systematic review and meta-analysis published in The Lancet, these authors pooled data from 110 trials comparing conventional (“cutting”) spinal needles with “atraumatic” ones. The atraumatic needles, after all, are thought to result in less tissue damage and, correspondingly, fewer complications. The perceived downside to the atraumatic needles, however, is potentially decreased procedural success.

In short, none of the results favor the conventional needles. The sample sizes for each measure ranged from roughly 1,000 to 24,000 patients, with most falling in the middle of that range. These authors evaluated the incidence of complications such as post-procedural headache, need for analgesia, need for epidural blood patch, nerve root irritation, and hearing disturbance. With regard to procedural success, these authors evaluated rates of traumatic taps, first-attempt success, and overall procedural failure.

The magnitude of the reduction varied across the various complications, but consistently favored the atraumatic needles. In absolute terms, the incidence of any post-procedural headache fell from 12% with conventional needles to 7% with atraumatic needles, and the need for epidural blood patch decreased from 2% to 1%. With regard to procedural success, no signal of difference was observed.
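
Put in number-needed-to-treat terms – a quick sketch using the absolute rates above:

```python
# (conventional, atraumatic) absolute complication rates quoted above
rates = {
    "post-procedural headache": (0.12, 0.07),
    "epidural blood patch": (0.02, 0.01),
}

for complication, (conventional, atraumatic) in rates.items():
    nnt = 1 / (conventional - atraumatic)
    print(f"NNT to prevent one {complication}: ~{nnt:.0f}")
# ~20 for headache, ~100 for blood patch
```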

The authors accurately report there is low awareness of the advantages of the atraumatic needles among clinicians. These data, even if not novel, at least are published on an adequate platform to improve awareness of the superior alternative.

“Atraumatic versus conventional lumbar puncture needles: a systematic review and meta-analysis”
https://www.ncbi.nlm.nih.gov/pubmed/29223694

What Does the ACC Say About OAC Reversal?

Just in case you were curious…

Conventional tests useful for ruling out clinically relevant drug levels contributing to bleeding risk:

  • Dabigatran – a normal thrombin time or a normal sensitive activated partial thromboplastin time (aPTT).
  • Factor Xa-inhibitors – None.

If you have access to specialized anti-Xa assays, they can be used to measure the level of activity of the Factor Xa-inhibitors.

Managing OAC-associated bleeding:

  • Warfarin – 4-factor prothrombin complex concentrates (PCCs) at weight-based dosing between 25 units/kg and 50 units/kg, based on INR (see the sketch after this list).
  • Dabigatran – Idarucizumab.
  • Factor Xa-inhibitors – 4-factor PCCs at 50 units/kg.
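
As a sketch of how the weight-based warfarin dosing might be operationalized – note the specific INR tiers and dose caps below are taken from commonly cited U.S. 4-factor PCC labeling, not from the ACC document itself, so verify against the pathway and your local protocol:

```python
def four_factor_pcc_dose(weight_kg: float, inr: float) -> float:
    """Weight-based 4-factor PCC dosing for warfarin reversal.

    INR tiers and maximum doses mirror commonly cited U.S. labeling
    (an assumption - verify against the ACC pathway and local protocol).
    """
    if inr < 2:
        raise ValueError("Dose-guided reversal typically starts at INR 2")
    if inr < 4:
        units_per_kg, cap = 25, 2500
    elif inr <= 6:
        units_per_kg, cap = 35, 3500
    else:
        units_per_kg, cap = 50, 5000
    return min(weight_kg * units_per_kg, cap)

print(four_factor_pcc_dose(80, 3.5))  # 2000 units
print(four_factor_pcc_dose(80, 7.2))  # 4000 units
```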

The authors also suggest PCCs as a second-line option behind idarucizumab for dabigatran reversal, but this is likely to be fruitless, and the evidence is very weak. Hemodialysis is also an option for removal of circulating dabigatran in a narrow set of clinical scenarios.

The authors also mention andexanet alfa and ciraparantag as potentially useful adjuncts at some point in the future, but no specific clinical role has yet been defined.

“2017 ACC Expert Consensus Decision Pathway on Management of Bleeding in Patients on Oral Anticoagulants”
http://www.onlinejacc.org/content/early/2017/11/10/j.jacc.2017.09.1085

Retiring Steroids for Hives

This is one of those perfectly unglamorous, yet infinitely practical sorts of topics we encounter in everyday Emergency Medicine. I must see a patient with urticaria, almost always without known underlying trigger or etiology, nearly every other shift. They are itching furiously, and, well – it’s an Emergency!

In true “don’t just stand there, do something!” fashion, I’ve done what I can to help. This typically means “something stronger”, something not over-the-counter, and is usually a dose of dexamethasone to augment antihistamine therapy.

This small trial randomized 100 patients with uncomplicated urticaria to levocetirizine (an H1 receptor-blocker) plus 40 mg of prednisone for four days, or levocetirizine plus placebo. Patients were assessed at several subsequent time points for “itch score”, rash recurrence, and other adverse events – and the winner is: placebo! There was no obvious difference or trend favoring those patients receiving steroids. There is, however, always the potential for Type II error with such a small sample – but when a positive outcome is this difficult to demonstrate, the magnitude of any effect is not likely to be large.
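
To put the Type II error caveat in rough quantitative terms, here is a minimal sketch of the smallest standardized effect a 50-per-arm trial could reliably detect – the alpha and power values are my assumptions, not the trial’s:

```python
from scipy.stats import norm

def minimum_detectable_effect(n_per_arm, alpha=0.05, power=0.80):
    """Smallest standardized mean difference (Cohen's d) detectable
    in a two-arm trial, via the usual normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) * (2 / n_per_arm) ** 0.5

print(f"d = {minimum_detectable_effect(50):.2f}")  # ~0.56, a moderate effect
```

An effect size of ~0.56 is “moderate” in conventional terms – so a small-but-real benefit of steroids could still hide in these data, just not a large one.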

Interestingly, they screened 710 patients in order to enroll 100, with 412 not meeting inclusion criteria. These exclusions were fairly evenly distributed among the following criteria: angioedema or anaphylaxis, use of antihistamines or glucocorticoids prior to the ED visit, and rash of greater than 24 hours’ duration. These exclusions limit the generalizability of the findings, considering the study cohort ultimately comprised only 100 of the 710 screened. It is probably still reasonable to suggest, in a Bayesian sense at least, steroids should be assumed not to have value in a somewhat wider population than explicitly tested here, but this is not definitive.

“Levocetirizine and Prednisone Are Not Superior to Levocetirizine Alone for the Treatment of Acute Urticaria: A Randomized Double-Blind Clinical Trial”

https://www.ncbi.nlm.nih.gov/pubmed/28476259

Out of the Way!

The mobile stroke unit is the new, malignant, extravagant reaction to the “Time is Brain” mantra. However, not all locations are endowed with such an embarrassment of resources.

This very brief report details a systems approach in Mashhad, in Northeast Iran. In 2015, these authors reported, only 1.2% of all strokes received treatment with IV tPA, and their analysis indicated prehospital delays were their primary issue. Rather than take the CT to the patient, their far more entertaining solution was simply to clear a path. To relieve delays from chronic traffic congestion and gridlock, they designed an online control system in which all traffic lights along the route can be activated based on the severity of the emergency medical condition. Green lights in a continuous path from the incident location to the medical facility keep traffic flowing and give ambulances room to maneuver. These authors report their implementation reduced prehospital transfer time by 50%, although absolute measures are not reported.
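
The report offers no implementation detail, but the core logic – preempting each signal along the computed route for the window in which the ambulance is expected – might look something like this entirely hypothetical sketch (all names and timings are invented):

```python
def preempt_route(route, eta_seconds, hold_seconds=30):
    """Hypothetical green-wave preemption along an ambulance route.

    route: ordered intersection controller IDs
    eta_seconds: estimated ambulance arrival time at each intersection
    """
    for intersection, eta in zip(route, eta_seconds):
        # open the green window slightly before the predicted arrival
        schedule_green(intersection, start=max(eta - 10, 0), duration=hold_seconds)

def schedule_green(intersection, start, duration):
    # stand-in for the real controller API, which the paper does not describe
    print(f"{intersection}: green from t+{start}s for {duration}s")

preempt_route(["X101", "X102", "X103"], [60, 120, 200])
```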

Now, they do mention their next step is to report on improvements in patient-oriented outcomes. However, unless the traffic is truly catastrophic, I expect improvements in anything but process surrogates will be difficult to detect.

“Time is brain: An online controlling of traffic lights can save lives”
http://emj.bmj.com/content/early/2017/11/24/emermed-2017-206888

Yet Another Failure to Prevent Contrast-Induced Nephropathy

I’m not the first one to this party, but this is worth a short note to touch upon, regardless, in case you missed it before the holiday break. I’ve written about retrospective propensity-matched analyses and other data suggesting the impact of contrast administration on acute kidney injury is overstated. This is yet another piece of the puzzle supporting these conclusions.

This is a beautifully massive trial, the PRESERVE Trial, with 5,177 patients enrolled in a 2×2 factorial design to test the impact of sodium bicarbonate and acetylcysteine on kidney injury following coronary angiography. This study was conducted in the United States, Australia, Malaysia, and New Zealand, and was planned to enroll 7,680 patients to detect a reduction in the primary end point from 8.7% to 6.5% for each trial intervention. As you might now have gathered, they stopped the trial early after an interim analysis met criteria for futility. The incidence of the primary end point – a composite of increase in creatinine, need for dialysis, and death – was effectively identical between the various arms, as were non-renal adverse events.
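
Their planned enrollment passes a quick plausibility check against the standard two-proportion sample-size formula – power and alpha below are my assumptions:

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Standard two-proportion sample size via the normal approximation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return z**2 * variance / (p1 - p2) ** 2

n = n_per_group(0.087, 0.065)
print(f"~{n:.0f} per group, ~{2 * n:.0f} total")
# ~3,000 per group - the same order as the planned 7,680 once
# factorial comparisons and interim analyses are accounted for
```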

The short takeaway from these data: if contrast-induced nephropathy cannot be prevented by any available treatment, is it a true clinical entity at the doses currently used in clinical practice? Or, rather, do the clinically ill simply suffer kidney injury, regardless?

“Outcomes after Angiography with Sodium Bicarbonate and Acetylcysteine”
http://www.nejm.org/doi/full/10.1056/NEJMoa1710933

The Syncope Prophecy

There has been a lot of research regarding the disposition of patients with syncope from the Emergency Department. Unfortunately, to put it bluntly, little of it is effectively usable in general practice. The most recent AHA syncope guidelines offer loose guidance that patients should undergo risk stratification prior to disposition, but their summary of decision instruments admits the limitations of each.

And, as it turns out, we know better, anyway.

In this prospective study, Emergency Physicians explicitly recorded the suspected etiology of syncope at the time of disposition, choosing from four broad categories: vasovagal, orthostatic hypotension, cardiac, other/unknown. Physicians were also asked to rate their level of confidence regarding their diagnosis on a scale from 0% to 100%. Research personnel then performed typical observational follow-up to determine 30-day adverse outcomes.

Over the ~4 year study period, 5,010 patients were included in the final analysis. The average age was 53 years, with a wide standard deviation of 23 years. Generally speaking, most patients were healthy, with hypertension the most prevalent known underlying medical condition at 31.6%. Over 90% of patients had ECG and blood testing in the ED, with a minority receiving any radiography. Over half (53.3%) of the cohort received a provisional diagnosis of vasovagal syncope, with 32.2% “other/unknown”, 9.1% orthostatic hypotension, and 5.4% cardiac causes.

The good news: only 1.0% of the vasovagal cohort had an adverse outcome within 30 days, none of which were deaths. Then, as expected, 20.9% of the cohort with a suspected cardiac cause suffered a serious outcome – although only 0.8% died, an actuarially interesting statistic considering the average age of this cohort was 86.5 years. The other large cohort, those with “other/unknown” etiology, suffered serious outcomes 4.8% of the time, and their outcomes were spread evenly across the various cardiac and non-cardiac categories.

A long story short, then – physicians do a pretty good job of identifying those who are at low and high risk for serious outcomes. Considering the imprecision of decision instruments and their limitations, it turns out the best computer … is probably still the trained brain. These data don’t have quite the granularity to decipher whether the low rates of adverse outcomes in the “other/unknown” cohort were related to specific diagnoses or underlying comorbidities, but it’s not a stretch to speculate physicians could probably have prognosticated fairly well on gestalt even within this category.

Worth noting, as well, in comparison to the PESIT study, the prevalence of undiagnosed pulmonary embolism in this population was 0.2%.

“Syncope Prognosis Based on Emergency Department Diagnosis: A Prospective Cohort Study”

https://www.ncbi.nlm.nih.gov/pubmed/29136314

“DAWN” of the Mismatch Era

It wasn’t truly so long ago the treatment for an acute stroke was virtually nonexistent. Then, it progressed, with patients eligible for treatment within 3 hours … then 4.5 hours … then 6 … and, now, 24 hours. Whatever happened to “time is brain”? How can we possibly be treating patients out to a day after onset of symptoms?

This is the DAWN trial, randomizing patients with acute ischemic stroke presenting between 6 and 24 hours after symptom onset to endovascular intervention or medical management. Eligibility criteria included occlusion of the internal carotid or proximal middle cerebral artery, paired with one of three clinical syndrome/infarct core mismatch profiles on CT or MRI perfusion imaging: cohorts with an NIHSS of at least 10 or at least 20, each with differing allowable infarct core volumes. The underlying theory here stems from observations that the viability of cerebral tissue depends upon collateral circulation, rather than simply the linear passage of time.

This is, as you might already have gathered from the press releases, a positive study. Unfortunately, it was so positive it was stopped early for benefit, based on a primary outcome these authors almost certainly created uniquely to support early termination of these sorts of trials: the “utility-weighted modified Rankin scale”. Rather than use the traditional mRS categories as in all other stroke trials, or even the statistically flawed “ordinal shift analysis”, these authors assigned point values to the various mRS categories, with the least-disabled receiving the most points. This resulted in a potential enrollment of 500 patients being stopped at the earliest pre-specified interim analysis, with a mere 200 patients enrolled.
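
For the curious, the utility-weighted mRS simply maps each mRS category onto a utility value and takes the mean. The weights below are those I recall being reported for DAWN – verify against the manuscript before relying on them:

```python
# Utility weights for mRS 0-6 as reported for DAWN (to the best of my
# recollection - verify against the manuscript)
UTILITY = {0: 10.0, 1: 9.1, 2: 7.6, 3: 6.5, 4: 3.3, 5: 0.0, 6: 0.0}

def utility_weighted_mrs(mrs_scores):
    """Mean utility-weighted mRS across a cohort of 90-day mRS scores."""
    return sum(UTILITY[s] for s in mrs_scores) / len(mrs_scores)

# toy cohort, purely for illustration
print(utility_weighted_mrs([0, 1, 2, 3, 4, 5, 6]))  # ~5.2
```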

Tossing out their nonsensical fake outcome measure in favor of the more easily approachable mRS categories, 52 of 107 (49%) thrombectomy patients were functionally independent (mRS 0-2) at 90 days, versus 13 of 99 (13%) of those in the control group. These results were roughly consistent across their various subgroup analyses – although, with such a small trial, the confidence intervals get awfully wide, awfully quickly. That said, despite all the other associated trial shenanigans, it is fairly obvious this sort of treatment is helpful to patients. I’ve been preaching tissue-based approaches to therapy for a couple years now, and despite this trial’s individual issues, in a Bayesian sense these results are consistent with prior evidence.
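
In absolute terms, those mRS 0-2 proportions translate to a strikingly small number needed to treat:

```python
thrombectomy = 52 / 107  # functionally independent at 90 days
control = 13 / 99

arr = thrombectomy - control
print(f"ARR: {arr:.1%}, NNT: ~{1 / arr:.0f}")  # ARR ~35.5%, NNT ~3
```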

Of course, this does not actually indicate the window for screening ought to be 24 hours, as will likely be justified from these data – the bounds of eligibility for a study do not simply translate into clinical policy recommendations. The study design explicitly stratifies patients into 6 to 12 hour and 12 to 24 hour cohorts, but the interquartile range for “time from symptom onset” for the entire cohort is 10.2 to 16.3 hours, implying approximately half the 12 to 24 hour cohort was actually randomized between 12 and 16 hours. This leaves a paucity of data – approximately 25 patients in each arm – to inform treatment in the 16 to 24 hour window. Contrariwise, these data also do not explicitly exclude patients beyond 24 hours as potential candidates for intervention, as this is a tissue-based, not time-based, paradigm. Further prospective study will be needed to determine the precise time window at which perfusion screening for large vessel occlusions ultimately becomes so low-yield there is no value in the pursuit.

These authors also do not provide useful information regarding the number of patients screened for possible inclusion. Much will be made of these results, with a likely profound impact on our approach to stroke. To properly design stroke systems of care and project resource utilization, physicians and policy makers need data regarding the clinical characteristics of all patients evaluated, along with the features best identifying those who ought to be triaged or transferred to specialized centers.

Finally, of course, there is the perpetual elephant in the room – heavy involvement from the sponsor in the conduct of the study, along with multiple authors on the payroll. These financial conflicts of interest always threaten internal and external validity by limiting generalizability and amplifying apparent effect sizes. All this said, however, this is probably an important step forward in the evolution in our approach to stroke.

“Thrombectomy 6 to 24 Hours after Stroke with a Mismatch between Deficit and Infarct”
http://www.nejm.org/doi/full/10.1056/NEJMoa1706442

It’s SAH Silly Season Again!

A blustery, relentless wind is blowing the last brittle leaves from the trees here in late November – which must mean it’s time to descend, yet again, into decision-instrument madness. Today’s candidate/culprit:

The Ottawa Subarachnoid Hemorrhage Rule, once derived, now validated in this most recent publication – a prospective, observational, multi-center follow-up to the original. The components are as you see above, and the cohort eligible for inclusion was neurologically intact “adult patients with nontraumatic headache that had reached maximal intensity within 1 hour of onset”. Over four years at six Canadian hospitals with a combined annual census of 365,000, these authors identified 1,743 eligible patients with headache, 1,153 of whom consented to study inclusion and follow-up. Of these, 67 patients were ultimately diagnosed with SAH, and the Rule picked up all of them, for a sensitivity of 100% (95% CI 94.6% to 100%) – and a specificity of 13.6% (95% CI 13.1% to 15.8%).
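
That 94.6% lower bound on a 67-for-67 sensitivity is simply the exact binomial (Clopper-Pearson) interval at work – a quick verification:

```python
# Exact lower bound when all n cases are detected: solve p**n = alpha/2
n, alpha = 67, 0.05
lower = (alpha / 2) ** (1 / n)
print(f"Sensitivity 95% CI: {lower:.1%} to 100%")  # 94.6% to 100%
```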

Unfortunately, take the infographic above and burn it, because, frankly, their route to 100% sensitivity is, essentially: everyone needs evaluation. This can be reasonable when the disease is life-threatening, as here – but the specificity is so poor, in a population with such a low prevalence, that the rate of evaluation becomes absurd.

If their rule had been followed in this cohort, the rate of investigation would have been 84.3% – that is, 972 patients evaluated in order to pick up the 67 positives. In the context of usual practice in this cohort, the observed investigation rate was 89.0%. That means, over the course of 4 years in these six hospitals, use of this decision instrument would have spared roughly one patient per hospital an investigation for SAH every six months. However, the hospitals included in this validation were the same ones that assisted in the derivation, meaning their practice was likely already based around the rule. I expect, in most settings, this decision instrument will increase the rate of investigation – and do so without substantially improving sensitivity.
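
The “one patient every six months” arithmetic falls out of the figures above – a quick sketch:

```python
cohort = 1153
rule_rate, usual_rate = 0.843, 0.890  # investigation rates

spared = cohort * (usual_rate - rule_rate)  # ~54 patients over the study
per_hospital_per_year = spared / (4 * 6)    # 4 years, 6 hospitals
print(f"{spared:.0f} investigations spared overall; "
      f"~{per_hospital_per_year:.1f} per hospital per year")  # ~2.3
```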

Furthermore, their outcome definition also includes patients with a diagnosis of non-aneurysmal SAH who did not undergo intervention, a cohort in whom the diagnosis is of uncertain clinical significance. If only those with aneurysms and morbidity/mortality-preventing interventions were included, the prevalence of disease would be even lower – and we would be looking at even fewer true positives for all this resource expenditure.

The other issue with a rule in which ~85% of patients undergo investigation for headache is the indication creep that may occur when physicians apply the rule outside the inclusion criteria for this study. The prevalence of SAH here is very high compared with the typical ED population presenting with headache. If less strict inclusion criteria are used, the net effect will likely be to increase low-value investigations in the overall population. Dissemination of this decision instrument and its downstream application to other severe headaches in the ED will likely further degrade the overall appropriateness of care.

Finally, just as a matter of principle, the information graphic is inappropriate because it implies a mandated course of medical practice. No decision instrument should ever promote itself as a replacement for clinical judgment.

“Validation of the Ottawa Subarachnoid Hemorrhage Rule in patients with acute headache”

http://www.cmaj.ca/content/189/45/E1379.abstract

The Acetaminophen/Ibuprofen Ascendancy

The new hotness of the day is the piece out of JAMA comparing our various oral analgesic treatment options for acute pain in the Emergency Department. This mundane line of research has been relatively fertile over the last few years – so, what do we have in store here?

This was a double-blind, randomized comparison of four different analgesic combinations for acute extremity pain. To enter the trial, one of the criteria was receipt of an imaging study, which served two purposes: an assumed proxy for more serious injuries and pain, and an increase in ED length-of-stay sufficient to reduce the number of patients lost to follow-up before the primary outcome assessment. The primary outcome, then, was reduction in pain on a 0 to 10 numerical rating scale at the 2-hour mark, with an interim 1-hour measurement recorded as well.

The study drugs were as follows: 400 mg of ibuprofen and 1000 mg of acetaminophen; 5 mg of oxycodone and 325 mg of acetaminophen; 5 mg of hydrocodone and 300 mg of acetaminophen; or 30 mg of codeine and 300 mg of acetaminophen. Approximately 100 patients per arm were targeted from their sample size calculations, and they ultimately randomized 416 into generally similar groups with respect to final diagnoses.

The outcomes are essentially a wash – raising the question of whether there is any advantage to opiate therapy for this indication. In our beautiful public health tapestry of increasing opiate misuse and addiction, any opportunity to reduce opiate prescribing is important. There are some reasonable takeaways with respect to the relative efficacy of the ibuprofen/acetaminophen, oxycodone/acetaminophen, hydrocodone/acetaminophen, and codeine/acetaminophen combinations, but their clinical relevance is highly questionable considering the doses tested in this study. This is, unfortunately, essentially a straw-man comparison between an adequate dose of non-opiate analgesia and the least-adequate preparation of each of the commonly used combination opiate products. A proper comparison in patients with severe pain ought to use a more typical maximal dose, which would probably be twice as much of each of the combination opiate products.

There are a few other small oddities relating to this study, of course. As an unavoidable consequence of the study setting, 60% of their study cohort identified as Latino and another 31% identified as black. There are potential genetic differences in pharmacokinetics relating to ethnicity, as well as cultural factors relating to the cohort enrolled at the study site, so the generalization of these data requires some caution. The study protocol states patients were to be asked whether they were satisfied with their pain control and side effects were to be recorded (nausea, vomiting, itchiness, etc.), but these are not reported in the final manuscript or supplement. Finally, these data are also limited, essentially, to sprains, fractures, and contusions. This represents an important slice of outpatients seeking analgesia, but may not be applicable to other types of pain.

Overall, this is reasonable evidence to support strategies of combination non-opiate therapy in patients without contraindications to either acetaminophen or ibuprofen. It should not, however, be offered as evidence of the disutility of commonly used combination opiate preparations.

“Effect of a Single Dose of Oral Opioid and Nonopioid Analgesics on Acute Extremity Pain in the Emergency Department”

https://jamanetwork.com/journals/jama/article-abstract/2661581