Yet Another Failure to Prevent Contrast-Induced Nephropathy

I’m not the first one to this party, but this is worth a short note regardless, in case you missed it before the holiday break. I’ve written about retrospective propensity-matched analyses and other data suggesting the impact of contrast administration on acute kidney injury is overstated. This is yet another piece of the puzzle supporting those conclusions.

This is a beautifully massive trial, the PRESERVE Trial, with 5,177 patients enrolled in a 2×2 factorial design to test the impact of sodium bicarbonate and acetylcysteine on kidney injury following coronary angiography. The study was conducted in the United States, Australia, Malaysia, and New Zealand, and was planned to enroll 7,680 patients to detect a reduction in the primary end point from 8.7% to 6.5% for each trial intervention. As you might now have gathered, they stopped the trial early after an interim analysis met criteria for futility. The incidence of the primary end point, a composite of increase in creatinine, dialysis, and death, was effectively identical across the various arms, as were non-renal adverse events.
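For the curious, the planned enrollment is roughly consistent with a standard two-proportion power calculation. Here is a back-of-the-envelope sketch using the cited 8.7% and 6.5% event rates – the trial’s exact power assumptions are not reproduced here, so treat this as illustrative arithmetic only:

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.9):
    """Normal-approximation sample size per group for comparing two proportions."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_b = z.inv_cdf(power)          # critical value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

# PRESERVE's cited event rates: 8.7% vs. 6.5%
n = n_per_group(0.087, 0.065)
print(round(n))  # ≈ 3044 per group under these illustrative assumptions
```

With 90% power and a two-sided alpha of 0.05, this lands in the same ballpark of thousands of patients as the planned 7,680 – a reminder of how large trials must be to detect small absolute differences in uncommon outcomes.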

The short takeaway from these data: if contrast-induced nephropathy cannot be prevented by any available treatment, is it a true clinical entity at the doses currently used in clinical practice? Or, rather, do the clinically ill simply suffer kidney injury, regardless?

“Outcomes after Angiography with Sodium Bicarbonate and Acetylcysteine”
http://www.nejm.org/doi/full/10.1056/NEJMoa1710933

The Syncope Prophecy

There has been a lot of research regarding the disposition of patients with syncope from the Emergency Department. Unfortunately, to put it bluntly, little of it is effectively usable in general practice. The most recent AHA syncope guidelines offer loose guidance that patients should undergo risk stratification prior to disposition, but their summary of decision instruments admits the limitations of each.

And, as it turns out, we know better, anyway.

In this prospective study, Emergency Physicians explicitly recorded the suspected etiology of syncope at the time of disposition, choosing from four broad categories: vasovagal, orthostatic hypotension, cardiac, other/unknown. Physicians were also asked to rate their level of confidence regarding their diagnosis on a scale from 0% to 100%. Research personnel then performed typical observational follow-up to determine 30-day adverse outcomes.

Over the ~4 year study period, 5,010 patients were included in the final analysis. The average age was 53 years, with a wide standard deviation of 23 years. Generally speaking, most patients were healthy, with hypertension the most prevalent known underlying medical condition at 31.6%. Over 90% of patients had ECG and blood testing in the ED, with a minority receiving any radiography. Over half (53.3%) of the cohort received a provisional diagnosis of vasovagal syncope, with 32.2% “other/unknown”, 9.1% orthostatic hypotension, and 5.4% cardiac causes.

The good news: only 1.0% of the vasovagal cohort had an adverse outcome within 30 days, none of which were deaths. Then, as expected, 20.9% of the cohort with suspected cardiac cause suffered a serious outcome, although only 0.8% died – an actuarially interesting statistic, considering the average age of this cohort was 86.5 years. The other large cohort of patients, those with “other/unknown” etiology, suffered serious outcomes 4.8% of the time, and their outcomes were spread evenly across the various cardiac and non-cardiac categories.

Long story short, then – physicians do a pretty good job of identifying those at low risk and high risk for serious outcomes. Considering the imprecision of decision instruments and their limitations, it turns out the best computer … is probably still the trained brain. These data don’t quite have the granularity to decipher whether the low rates of adverse outcomes in the “other/unknown” cohort were related to specific diagnoses or underlying comorbidities, but it’s not a stretch to speculate physicians could probably have prognosticated fairly well on gestalt within this category, as well.

Worth noting, as well, in comparison to the PESIT study, the prevalence of undiagnosed pulmonary embolism in this population was 0.2%.

“Syncope Prognosis Based on Emergency Department Diagnosis: A Prospective Cohort Study”

https://www.ncbi.nlm.nih.gov/pubmed/29136314

“DAWN” of the Mismatch Era

It wasn’t truly so long ago the treatment for an acute stroke was virtually nonexistent. Then, it progressed, with patients eligible for treatment within 3 hours … then 4.5 hours … then 6 … and, now, 24 hours. Whatever happened to “time is brain”? How can we possibly be treating patients out to a day after onset of symptoms?

This is the DAWN trial, randomizing patients with acute ischemic stroke and symptom onset 6 to 24 hours prior to either endovascular intervention or medical management. Eligibility criteria included occlusion of the internal carotid or proximal middle cerebral artery, paired with one of three clinical syndrome/infarct core mismatches based on CT or MRI perfusion imaging: three cohorts defined by NIHSS of at least 10 or at least 20, with differing maximum infarct core volumes. The underlying theory here stems from observations that the viability of cerebral tissue depends upon collateral circulation, rather than simply the linear passage of time.

This is, as you might already have gathered from the press releases, a positive study. Unfortunately, it was so positive it was stopped early for benefit based on a primary outcome these authors almost certainly created uniquely to support early termination of these sorts of trials: the “utility-weighted modified Rankin scale”. Rather than use the traditional mRS as in all other stroke trials, or, even, the statistically flawed “ordinal shift analysis”, these authors assigned point values to the various mRS categories, those with the least disability receiving the most points. This resulted in the potential enrollment of 500 patients being stopped at the earliest possible pre-specified interim analysis with a mere 200 patients enrolled.
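To make the mechanism of that criticism concrete, here is a minimal sketch of how a utility-weighted mRS works: each Rankin category is assigned a point value, and the trial outcome becomes the mean utility per arm. The weights below are invented for illustration, not DAWN’s actual values:

```python
# Hypothetical utility weights for mRS categories 0 (no symptoms) through
# 6 (death); better outcomes earn more points. Illustrative values only.
weights = {0: 10.0, 1: 9.0, 2: 8.0, 3: 6.0, 4: 3.0, 5: 0.0, 6: 0.0}

def mean_utility(mrs_scores):
    """Average utility-weighted score across a list of per-patient mRS values."""
    return sum(weights[s] for s in mrs_scores) / len(mrs_scores)

# Two hypothetical five-patient arms with the same number of mRS 0-2
# patients can still produce different utility-weighted results:
arm_a = [0, 1, 2, 4, 5]
arm_b = [2, 2, 2, 4, 5]
print(mean_utility(arm_a), mean_utility(arm_b))  # 6.0 5.4
```

The point is that collapsing seven categories into a continuous score concentrates statistical power – which is precisely what makes it attractive for early stopping, and precisely why it deserves scrutiny.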

Tossing out their nonsensical fake outcome measure for the more easily approachable mRS categories: 52 of 107 (49%) thrombectomy patients were functionally independent (mRS 0-2) at 90 days, versus 13 of 99 (13%) of those in the control group. These results were roughly consistent across their various subgroup analyses, although, with such a small trial, the confidence intervals get awfully wide, awfully quickly. That said, despite all the other associated trial shenanigans, it is fairly obvious this sort of treatment is helpful to patients. I’ve been preaching tissue-based approaches to therapy for a couple years now, and despite this trial’s individual issues, in a Bayesian sense these results are consistent with prior evidence.
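Working from those raw counts, the absolute effect size is enormous by stroke-trial standards; a quick sketch of the arithmetic:

```python
# Absolute risk difference and number-needed-to-treat from the reported
# mRS 0-2 counts: 52/107 thrombectomy vs. 13/99 control.
treated, n_treated = 52, 107
control, n_control = 13, 99

p_t = treated / n_treated   # ~0.49 functionally independent
p_c = control / n_control   # ~0.13 functionally independent
arr = p_t - p_c             # absolute risk difference
nnt = 1 / arr               # number needed to treat

print(f"ARR = {arr:.1%}, NNT ≈ {nnt:.1f}")  # ARR = 35.5%, NNT ≈ 2.8
```

A number-needed-to-treat around three for functional independence, if it holds up, is a remarkable figure – which is part of why the result is believable despite the early stopping.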

Of course, this does not actually indicate the window for screening ought to be 24 hours, as will likely be argued from these data – the bounds of eligibility for the study do not simply translate into clinical policy recommendations. The study design does explicitly stratify patients into 6 to 12 hour and 12 to 24 hour cohorts, but the interquartile range for “time from symptom onset” for the entire cohort is 10.2 to 16.3 hours, implying approximately half the 12 to 24 hour cohort was actually randomized between 12 and 16 hours. This leaves a paucity of data – approximately 25 patients in each arm – to inform treatment in the 16 to 24 hour window. Contrariwise, these data also do not explicitly exclude patients beyond 24 hours as potential candidates for intervention, as this is a tissue-based, not time-based, paradigm. Further prospective study will be needed to determine the time window at which perfusion screening for large-vessel occlusions ultimately becomes so low-yield there is no value in the pursuit.

These authors also do not provide useful information regarding the number of patients screened for possible inclusion. Much will be made of these results, with a likely profound impact on our approach to stroke. To properly design stroke systems of care and project resource utilization, physicians and policy makers need data regarding the clinical characteristics of all patients evaluated, and the features that best identify those who ought to be triaged or transferred to specialized centers.

Finally, of course, there is the perpetual elephant in the room – heavy involvement from the sponsor in the conduct of the study, along with multiple authors on its payroll. These financial conflicts of interest threaten both internal and external validity, tending to amplify apparent effect sizes and limit generalizability. All this said, however, this is probably an important step forward in the evolution of our approach to stroke.

“Thrombectomy 6 to 24 Hours after Stroke with a Mismatch between Deficit and Infarct”
http://www.nejm.org/doi/full/10.1056/NEJMoa1706442

It’s SAH Silly Season Again!

A blustery, relentless wind is blowing the last brittle leaves from the trees here in late November – which must mean it’s time to descend, yet again, into decision-instrument madness. Today’s candidate/culprit:

The Ottawa Subarachnoid Hemorrhage Rule, once derived, now validated in this most recent publication: a prospective, observational, multi-center follow-up to the original derivation. The components are as you see above, and the cohort eligible for inclusion comprised neurologically intact “adult patients with nontraumatic headache that had reached maximal intensity within 1 hour of onset”. Over four years, across six Canadian hospitals with a combined annual census of 365,000, these authors identified 1,743 eligible patients with headache, 1,153 of whom consented to study inclusion and follow-up. Of these, 67 patients were ultimately diagnosed with SAH, and the Rule picked up all of them, for a sensitivity of 100% (95% CI 94.6% to 100%) – and a specificity of 13.6% (95% CI 13.1% to 15.8%).
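For the statistically inclined, that 94.6% lower confidence bound is exactly what a perfect 67-of-67 result produces. A quick check, assuming a standard exact (Clopper-Pearson) binomial interval was used:

```python
# When all n of n cases are detected, the Clopper-Pearson lower bound on
# sensitivity has the closed form (alpha/2) ** (1/n); the upper bound is 100%.
def exact_lower_bound(n, alpha=0.05):
    """Exact lower confidence limit for a proportion of n successes in n trials."""
    return (alpha / 2) ** (1 / n)

# 67 of 67 SAH cases detected:
print(f"{exact_lower_bound(67):.1%}")  # 94.6%
```

This also illustrates why 100% sensitivity in a cohort of this size is less reassuring than it sounds: the data remain compatible with missing up to 1 in 20 cases.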

Unfortunately, take the infographic above and burn it, because, frankly, their route to 100% sensitivity is essentially: everyone needs evaluation. This can be reasonable when the disease is life-threatening, as here, but the specificity is so poor, in a population with such a low prevalence, that the rate of evaluation becomes absurd.

If their rule had been followed in this cohort, the rate of investigation would have been 84.3% – that is, 972 patients evaluated in order to pick up the 67 positives. In the context of usual practice in this cohort, the investigation rate was 89.0%. That means, over the course of four years in these six hospitals, use of this decision instrument would have spared roughly one patient per hospital an investigation for SAH every six months. However, the hospitals included in this validation were the same ones that assisted in the derivation, meaning their practice was likely already based around the rule. I expect, in most settings, this decision instrument will increase the rate of investigation – and do so without substantially improving sensitivity.
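The back-of-the-envelope arithmetic behind that claim, in a quick sketch:

```python
# Investigations avoided, using the cohort figures from the validation study:
# 1,153 patients, usual-practice investigation rate 89.0%, rule-based 84.3%,
# accrued over 4 years across 6 hospitals.
cohort = 1153
actual_rate = 0.890   # investigation rate under usual practice
rule_rate = 0.843     # investigation rate had the rule been followed

avoided = cohort * (actual_rate - rule_rate)
per_hospital_per_year = avoided / (4 * 6)
print(round(avoided), round(per_hospital_per_year, 1))  # 54 2.3
```

Roughly 54 investigations avoided over four years works out to about two per hospital per year – a trivial yield for adopting an entire decision instrument.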

Furthermore, their definition also includes patients with a diagnosis of non-aneurysmal SAH who did not undergo intervention, a cohort in whom the diagnosis is of uncertain clinical significance. If only those with aneurysms and morbidity- or mortality-preventing interventions were included, the prevalence of disease would be even lower. We would then be looking at even fewer true positives for all this resource expenditure.

The other issue with a rule in which ~85% of patients undergo investigation for headache is the indication creep that may occur when physicians apply the rule outside the inclusion criteria for this study. The prevalence of SAH here is very high compared with the typical ED population presenting with headache. If less strict inclusion criteria are used, the net effect will likely be to increase low-value investigations in the overall population. Dissemination of this decision instrument, and its downstream application to other severe headaches in the ED, will likely further degrade the overall appropriateness of care.

Finally, just as a matter of principle, the information graphic is inappropriate because it implies a mandated course of medical practice. No decision instrument should ever promote itself as a replacement for clinical judgment.

“Validation of the Ottawa Subarachnoid Hemorrhage Rule in patients with acute headache”

http://www.cmaj.ca/content/189/45/E1379.abstract

The Acetaminophen/Ibuprofen Ascendancy

The new hotness of the day is this piece out of JAMA comparing various oral analgesic options for acute pain in the Emergency Department. This mundane line of research has been relatively fertile over the last few years, so, what do we have in store here?

This was a double-blind, randomized trial of four different analgesic combinations for acute extremity pain. To enter the trial, one of the criteria was receipt of an imaging study, which served two purposes: an assumed proxy for more serious injury and pain, and an increase in ED length-of-stay sufficient to reduce patients lost to follow-up for the primary outcome. The primary outcome was reduction in pain on a 0 to 10 numerical rating scale at the 2-hour mark, with an interim 1-hour measurement recorded as well.

The study drugs were as follows: 400 mg of ibuprofen and 1000 mg of acetaminophen; 5 mg of oxycodone and 325 mg of acetaminophen; 5 mg of hydrocodone and 300 mg of acetaminophen; or 30 mg of codeine and 300 mg of acetaminophen. Approximately 100 patients per arm were targeted from their sample size calculations, and they ultimately randomized 416 into generally similar groups with respect to final diagnoses.

The outcomes are essentially a wash – raising the question of whether there is any advantage to opiate therapy for this indication. In our beautiful public-health tapestry of increasing opiate misuse and addiction, any opportunity to reduce opiate prescribing is important. There are some reasonable takeaways with respect to the relative efficacy of the ibuprofen/acetaminophen, oxycodone/acetaminophen, hydrocodone/acetaminophen, and codeine/acetaminophen combinations, but their clinical relevance is highly questionable considering the doses tested in this study. This is, unfortunately, essentially a straw-man comparison between an adequate dose of non-opiate analgesia and the least-adequate preparation of each of the commonly used combination opiate products. A proper comparison in patients with severe pain ought to use a more typical maximal dose, which would probably be twice as much of each of the combination opiate products.

There are a few other small oddities relating to this study, of course. As an unavoidable consequence of the study setting, 60% of their study cohort identified as Latino and another 31% identified as black. There are potential genetic differences in pharmacokinetics relating to ethnicity, as well as cultural factors relating to the cohort enrolled at the study site, so the generalization of these data requires some caution. The study protocol states patients were to be asked whether they were satisfied with their pain control and side effects were to be recorded (nausea, vomiting, itchiness, etc.), but these are not reported in the final manuscript or supplement. Finally, these data are also limited, essentially, to sprains, fractures, and contusions. This represents an important slice of outpatients seeking analgesia, but may not be applicable to other types of pain.

Overall, however, this is reasonable evidence to support strategies of combination non-opiate therapy in patients without contraindications to both acetaminophen and ibuprofen.  It should not, however, be offered as evidence of the disutility of commonly used combination opiate preparations.

“Effect of a Single Dose of Oral Opioid and Nonopioid Analgesics on Acute Extremity Pain in the Emergency Department”

https://jamanetwork.com/journals/jama/article-abstract/2661581

Why Are Children Dizzy?

Vertigo presentations in adults are nearly always benign – with cerebral ischemia generally the most worrisome diagnosis in the differential. But, what about children? With a much lower risk for stroke, but also spared the other decay and decrepitude of aging, ought we be more or less concerned?

The short answer: mostly no. However, the etiologies of pediatric vertigo are almost certainly different.

In this short systematic review comprising 24 studies and 2,726 children, the vast majority of cases resulted from generally benign etiologies. The most common diagnosis, “vestibular migraine”, accounted for about a quarter of cases, followed by a smattering of peripheral vertigo and labyrinthitis-spectrum disorders. Not until diagnostic prevalence approached ~1% of cases did the most serious underlying etiologies begin to manifest, with central nervous system tumors, demyelinating disease, and ototoxic medication effects at the top of the list of infrequent findings.

The limitations of this analysis include a lack of generalizability to the Emergency Department, as several of the included articles are drawn from outpatient subspecialty case series. A reasonable takeaway from these data, at least, is that, as in adults, serious underlying etiologies are very infrequent, and isolated vertigo need not be particularly worrisome absent other important neurologic findings.

“The Differential Diagnosis of Vertigo in Children: A Systematic Review of 2726 Cases”
https://www.ncbi.nlm.nih.gov/pubmed/29095392

And, The Safest Pediatric Sedation Drug Is …

Ketamine.

This ought not surprise virtually anyone, considering the vast body of experience physicians have performing safe, effective procedural sedation with ketamine. However, medicine is prone to its dogmatic confirmation bias, so I applaud these authors for this important report.

This is a prospective, observational, multi-center cohort specifically evaluating all episodes of procedural sedation for serious adverse events and important interventions. These authors recorded medication cocktails used for sedation, any adjunctive use of medication, the procedure performed, fasting status, and underlying health risks, and then tracked the outcomes of each procedure performed.

Ultimately, 6,295 children and their sedation events were included in this study. The most commonly used sedation medications were ketamine, propofol, and combinations of ketamine, propofol, and fentanyl. Serious events were rare, occurring in about 1% of sedations – and, likewise, so were important interventions. Furthermore, the vast majority of events and interventions were simply temporary use of positive pressure ventilation in response to periods of apnea. Importantly, no patients required intubation or unplanned hospital admission. Oxygen desaturation was tracked separately from serious events and, along with vomiting, occurred in approximately 5% of sedation procedures.

With regard to other factors contributing to serious events or interventions, any deviation from ketamine monotherapy increased such risks. Whether ketamine was combined with an opiate or benzodiazepine, or whether propofol was used alone or in combination, all increased the risk of serious events by a small absolute amount over baseline. Several figures included in the manuscript describe the various risk factors associated with serious outcomes, with generally predictable associations, including increased risks with periprocedural opiate use and decreased vomiting when ketamine was excluded.

Overall, even though the short answer to the question posed in the title is “ketamine”, the slightly longer answer is “any choice is probably fine”. Even though the relative risks are increased, the absolute risks are small – and the severity of interventions required, despite their labeling, were essentially benign.
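As an illustration of why those relative increases matter so little in absolute terms, here is a quick sketch – the numbers are made up to mirror the ~1% order of magnitude reported, not taken from the paper’s tables:

```python
# With a rare baseline event rate, even a doubled relative risk translates
# into a tiny absolute increase and a large number-needed-to-harm.
# Hypothetical figures for illustration only.
baseline = 0.01       # ~1% baseline serious-event rate
relative_risk = 2.0   # a hypothetical doubling of risk

absolute_increase = baseline * (relative_risk - 1)
number_needed_to_harm = 1 / absolute_increase

print(f"{absolute_increase:.1%} absolute increase; NNH = {number_needed_to_harm:.0f}")
```

Doubling a 1% risk adds only one serious event per hundred sedations – consistent with the takeaway that any reasonable regimen choice is probably fine.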

“Risk Factors for Adverse Events in Emergency Department Procedural Sedation for Children”
https://www.ncbi.nlm.nih.gov/pubmed/28828486