Why Patients Stop Waiting

“Left without being seen” rates are tracked by every medical director – and fluctuations in this rate frequently result in knee-jerk interventions.  However, it’s not well understood which patients are actually at risk for LWBS, or why they leave.

In this study, a Wharton professor takes an operations-modeling look at ED LWBS rates, stratified by Emergency Severity Index.  Using timestamp data from 150,000 ED visits, the analysis derives four reasonable conclusions:

  • For patients with moderate severity, observing additional patients in the queue led to increased abandonment.
  • Additional arrivals into the waiting room increased abandonment, while departures decreased abandonment.
  • Watching an arrival “queue-jump” due to a higher acuity level increased the chance of abandonment.
  • Initiation of diagnostic testing – such as triage protocols – reduced abandonment even if overall wait time was unchanged.

Overall, it’s fascinating to see a somewhat agnostic perspective on the influences acting on waiting room patients.  The entire report is available as a PDF directly from Wharton.

“Waiting Patiently: An Empirical Study of Queue Abandonment in an Emergency Department”
http://knowledge.wharton.upenn.edu/papers/download/06182013_Terwiesch-paper.pdf

Continuing Debate Over Thrombolysis

The debate spurred by Jeanne Lenzer’s report on conflicts of interest by guideline writers continues.

Drs. Grotta, Hoffman, Saver, Newman, Solomon, Klauer, Marchidann, Sandercock, Quinn and I all contribute to the back-and-forth regarding the future of tPA in stroke, while Drs. Geisler and Bracken respond regarding the data for use of steroids in spinal cord trauma.  Some truly amazing responses by leading physicians on both sides of the issues.
“Why we can’t trust clinical guidelines”

A Break in Massive Transfusion Evidence

The standard of care in trauma centers for massive transfusion in the setting of trauma has rapidly evolved to a fixed-ratio protocol, attempting to provide a physiologically balanced 1:1:1 mixture of PRBCs, FFP, and platelets.  The evidence upon which this is based stems from observational battlefield data, as well as retrospective trauma service registries.  However, as I’ve noted before (parroted, really, from folks smarter than me), these retrospective reviews are prone to survivorship bias – folks too sick to survive until FFP can be thawed will die, and appear to reflect an increased mortality associated with not receiving FFP.

There is a large, multi-center prospective trial underway attempting to determine the optimal ratio of blood products – testing PRBC:FFP:platelets in 1:1:1 vs. 2:1:1 – because of particular concerns regarding the complications and costs associated with increasing FFP and platelet transfusions.  This article describes a single-center, prospective study of the feasibility of even implementing a 1:1:1 ratio, given the difficulty of keeping plasma products on hand – but it has the interesting side effect of providing some unexpected comparative outcomes data.

These authors enrolled, over a two year period, 78 patients from a pool of 203 screened for eligibility, and randomized them in unblinded fashion to 1:1:1 fixed ratio transfusion or their “usual care” control.  “Usual care” for this institution consists of transfusion product balance guided by laboratory results (Hgb, INR, PTT, and fibrinogen).  They found, as the primary outcome of their study, that the 1:1:1 ratio was feasible – but resulted in over twice as many wasted units of FFP (22% vs. 10% of thawed units).

The secondary outcomes reported include coagulation monitoring targets and mortality data.  There was, for the most part, no statistically significant difference in any reported outcome.  The coagulation monitoring targets all had p-values ranging from 0.4 to 0.8 and, truly, are not different.  The mortality data, on the other hand, showed 29.7% mortality in the 1:1:1 group and 9.4% mortality in the usual care group – a 20.3% absolute difference (95% CI 2.5 to 38.2).
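That absolute difference and its confidence interval can be reproduced with a simple Wald approximation.  A minimal sketch, assuming event counts of 11/37 and 3/32 – these counts are inferred from the reported percentages and group total, not stated in the text here:

```python
from math import sqrt

def risk_difference_ci(e1, n1, e2, n2, z=1.96):
    """Wald 95% CI for an absolute risk difference between two groups."""
    p1, p2 = e1 / n1, e2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff, diff - z * se, diff + z * se

# Assumed counts: 11/37 deaths (29.7%) in the 1:1:1 arm,
# 3/32 (9.4%) in the usual-care arm -- inferred, not quoted.
diff, lo, hi = risk_difference_ci(11, 37, 3, 32)
print(f"{diff:.1%} ({lo:.1%} to {hi:.1%})")  # close to the reported 20.3% (2.5 to 38.2)
```

Note how wide that interval is – the lower bound barely clears zero, which is exactly why a secondary outcome in a 78-patient feasibility study shouldn’t change practice.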

This is not practice-changing evidence.  It’s small-sample data derived from secondary outcomes in a feasibility study.  But, regardless, it is very interesting to see.

“Effect of a fixed-ratio (1:1:1) transfusion protocol versus laboratory-results–guided transfusion in patients with severe trauma: a randomized feasibility trial”
www.ncbi.nlm.nih.gov/pubmed/23857856

We Can Cath You

Despite all the bad press the United States healthcare system gets, there is one incontrovertible truth: we’re the leading authority in cardiac catheterization.  If practice makes perfect, no one is closer to perfection than us.  Let’s not tarnish our procedural expertise with silly notions of appropriateness, shall we?

These authors from Canada – clearly, with a hometown bias – undertake a retrospective registry review comparing cardiac catheterizations from Ontario and New York state.  They observe New York state performs twice as many cardiac catheterizations per capita and wonder – are Americans twice as unhealthy, or are our cardiologists just twice as skilled at profiting from cardiac catheterization?

The review excludes patients known to have obstructive CAD, shock, recent MI, or unstable angina – which makes this essentially an elective cath cohort.  Overall, 30.4% of patients in New York were diagnosed with obstructive CAD on catheterization, compared with 44.8% in Ontario.  This higher diagnostic yield is unsurprising, considering the New York cohort had, by far, a lower predicted probability of obstructive CAD.  This reasonably supports the authors’ follow-up conclusion that Ontario does a superior job of selecting patients for the procedure.

I can’t believe they would imply U.S. healthcare delivery is somehow inefficient.

As Outside Hospital notes: We can cath you.

“Prevalence and Extent of Obstructive Coronary Artery Disease Among Patients Undergoing Elective Coronary Catheterization in New York State and Ontario”
www.ncbi.nlm.nih.gov/pubmed/23839750

Rewriting The Inconvenient Rules on tPA

Stroke neurologists spent the first decade of the tPA era explaining away negative studies and excluding patients from meta-analyses secondary to “protocol violations”, emphasizing that strict adherence to NINDS criteria supports clear benefit to tPA.  As we’ve seen, however, the new push is rather to take back the night, expanding the treatment window out to 4.5 hours, 6 hours, the extreme elderly, and all sorts of previously excluded patients.

But we don’t need evidence to do that – we just need Genentech to fly a select group of experts out to a meeting so they can issue a report in support of throwing out the previous exclusion criteria: The Re-examining Acute Eligibility for Thrombolysis (TREAT) Task Force.  The astute reader might already recognize the prevailing bias just from the acronym for their group.

This is the first report, specifically addressing “rapidly improving stroke symptoms”.  I agree with the general concept – if a severely disabled patient has shown some improvement, but is still profoundly limited, it is reasonable to consider them a candidate for treatment.  The original NINDS group was interested primarily in excluding TIAs – folks they thought would go on to have zero residual disability.  However, these authors are already over the cliff on treating nearly every improving and mild stroke symptom.  In true flabbergasting lunacy, a majority of surveyed participants wouldn’t even randomize patients who rapidly improved to an NIHSS of 0 into a clinical trial comparing outcomes with tPA vs. placebo – unless the patient had zero disability, they would simply treat.

I’m no longer surprised by such extreme viewpoints.  After all, here are the authors’ significant COIs:

Broderick: Genentech, Novo Nordisk, Schering-Plough, PhotoThera
Grotta: none
Kasner: Genentech
Khatri: Genentech (for a study called “Potential for rtPA to Improve Strokes with Mild Symptoms”!), Penumbra, Janssen, Lake Biosciences, Medical Dialogues
Levine: Genentech
Meyer: Genentech, The Medicines Company
Panagos: Genentech
Romano: Genentech
Scott: none
Kim: none

“Logistical support for the in-person meeting was provided by Infusion, An InforMed Group Company, who was funded by Genentech, Inc. Travel funds (lodging, airfare, and food) to this meeting were also provided to some participants by Genentech, Inc.”

I cannot fathom why stroke neurologists are still befuddled by continued skepticism towards their tPA recommendations when they persist in pumping out recommendations saturated by bias.

“Review, Historical Context, and Clarifications of the NINDS rt-PA Stroke Trials Exclusion Criteria : Part 1: Rapidly Improving Stroke Symptoms”
www.ncbi.nlm.nih.gov/pubmed/23847249

Dermabond … The Tongue?

As an addition to the annals of possibly brilliant innovations, this is a case report of an attempt to use 2-octyl cyanoacrylate (Dermabond) on the tongue.  The authors document a gaping laceration of the tongue in a pediatric patient – and a family that refused consent for sedation and suture repair.  So, even though Dermabond is not recommended for use on mucosal surfaces – onward!

After extensive drying, the authors document secure and successful closure.  However, at the 24-hour wound check, the glue had begun to detach, requiring removal of the first application and a second treatment.  No further complications were encountered, and a 14-day revisit showed complete resolution of the injury.

I agree with these authors – the tongue is not a trivial repair, particularly in the unruly youth.  The risk is probably minimal – although the tissue adhesive could be problematic if it detaches.  The laceration itself is documented in images – and, while it’s possible the still images don’t tell the whole story, I’m not sure it necessitated any repair at all.

I appreciate the novel use, but it’s unclear whether this is a technique worth revisiting with much enthusiasm.

“Pediatric Tongue Laceration Repair Using 2-Octyl Cyanoacrylate (Dermabond)”
www.ncbi.nlm.nih.gov/pubmed/23827167

23.4% – The Next Great Hypertonic Saline

Mannitol and hypertonic saline are the most commonly used medications to ward off the catastrophic complications of malignant increased intracranial pressure.  Hypertonic saline, in my experience, has typically been 3%, but there are multiple different concentrations in use.

These authors perform a systematic review and meta-analysis of 23.4% saline.  After all, the theory goes, a more osmotically powerful concentrated solution will exert greater physiologic effects.  They identified 11 articles, six of which they included in a meta-analysis to identify an effect size for intracranial pressure reduction.  Using the pooled data, the measured effect was a 55.6% (CI 44%-67%) decrease in ICP within 60 minutes.  Their systematic review uncovered few adverse effects of 23.4% saline – transient hypotension and rare hemolytic anemia – and even reported acute reversal of herniation syndromes with good neurologic outcomes.
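To put “more osmotically powerful” in perspective, a back-of-envelope calculation shows just how far 23.4% sits from the concentrations we usually handle.  A minimal sketch, assuming complete dissociation of NaCl (molecular weight ≈ 58.44 g/mol) into two osmoles per mole:

```python
# Rough osmolarity of % weight/volume NaCl solutions,
# assuming full dissociation into Na+ and Cl- (2 osmoles/mole).
NACL_MW = 58.44  # g/mol

def osmolarity(percent_wv):
    """Approximate mOsm/L for a % w/v NaCl solution."""
    grams_per_liter = percent_wv * 10  # 1% w/v = 10 g/L
    return grams_per_liter / NACL_MW * 2 * 1000

for conc in (0.9, 3.0, 23.4):
    print(f"{conc}% saline: ~{osmolarity(conc):.0f} mOsm/L")
# 0.9% ~308, 3% ~1027, 23.4% ~8008 -- roughly 26x normal saline
```

By this estimate, 23.4% saline is roughly eight times as osmotically active as the 3% most of us are familiar with – which makes both the reported ICP effect and the need for central access unsurprising.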

There is a ton of heterogeneity between studies – in dosing of 23.4% saline, co-administration of mannitol, and the underlying pathophysiology of the elevated ICP.  Most studies are also tiny, ranging between 8 and 68 patients, and are either retrospective reviews or prospective studies with non-random selection.  Many did not report patient-oriented outcomes, so it’s hard to truly compare this practice to the current standard of care.

That being said, it seems interesting for potential use as “rescue” therapy when the alternative is permanent cerebral asphyxiation – and further study is needed to describe the appropriate (if any) population for use.

For reference: the salinity of seawater is about 3.5%, the Great Salt Lake varies between 5% and 27%, and the Dead Sea is approximately 33.7%.  Definitely not appropriate for a peripheral intravenous line!

“High-Osmolarity Saline in Neurocritical Care: Systematic Review and Meta-Analysis”

Copeptin & Publication Bias

There is a phenomenon in the medical literature called publication bias.  It results from two tendencies – authors are more likely to submit the results of trials with positive findings, and editors are more likely to publish them.  This skews the composition of the scientific literature, and exerts a particularly troubling hidden effect on meta-analyses and systematic reviews.

I comment upon this in the context of yet another cardiovascular assay article that has – essentially – negative results that are spun to be positive.  Copeptin, as I’ve discussed before, is another acute-phase indicator of myocardial demise – one that sacrifices specificity for sensitivity.  These authors combine copeptin with hs-TnT for evaluation of chest pain in the Emergency Department, and report several favorable findings in their abstract and the text of their discussion.

In reality, only one of the findings they focus on is truly positive – an increase in sensitivity from 76% to 96%.  The NPV increases from 95% (90.4-98.3) to 98.9% (94.2-100) – overlapping confidence intervals, and not truly a positive result.  More importantly, the authors report copeptin “adds incremental value” – when the area under the receiver operating characteristic curve is statistically identical at 0.886 (0.85-0.922) vs. 0.928 (0.89-0.967).
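It’s worth remembering why a large jump in sensitivity translates into only a small bump in NPV: at the low disease prevalences typical of ED chest pain cohorts, NPV is already high.  A minimal sketch via Bayes’ theorem – the 15% prevalence and 85% specificity below are hypothetical illustration values, not figures from this paper:

```python
def npv(sens, spec, prev):
    """Negative predictive value from sensitivity, specificity, prevalence."""
    true_neg = spec * (1 - prev)   # probability test-negative and disease-free
    false_neg = (1 - sens) * prev  # probability test-negative but diseased
    return true_neg / (true_neg + false_neg)

# Hypothetical prevalence (15%) and specificity (85%), chosen only
# to illustrate the sensitivity-to-NPV relationship.
for sens in (0.76, 0.96):
    print(f"sensitivity {sens:.0%} -> NPV {npv(sens, 0.85, 0.15):.1%}")
```

Under these assumed inputs, the 20-point sensitivity gain moves NPV by only a few points – which is roughly the pattern reported here, and why the NPV “improvement” carries so little clinical weight.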

Perhaps copeptin will someday be proven to add true clinical value in an algorithm for the rapid assessment of chest pain in the Emergency Department.  This paper, however, seems to have exaggerated the positivity of its results.  Considering the spate of other recent “positive” copeptin articles – I foresee systematic reviews and meta-analyses further perpetuating an unremarkable reported advantage in test characteristics.

“Early rule out of acute myocardial infarction in ED patients: value of combined high-sensitivity cardiac troponin T and ultrasensitive copeptin assays at admission”
http://www.ncbi.nlm.nih.gov/pubmed/23816196

Subsegmental Pulmonary Emboli Are Just As Deadly?

A couple weeks back, I posted my algorithm regarding (not) evaluating patients with chest pain for pulmonary embolism.  As has been written multiple times – most recently in this BMJ article – the evidence for overdiagnosis is rather overwhelming, and I’m trying to come up with strategies to do my small part to reduce it.

These Dutch authors, however, perform a retrospective analysis of prospectively-collected data and conclude “patients with symptomatic [subsegmental pulmonary embolism] appear to mimic those with segmental or more proximal PE as regards their risk profile and short term clinical course.”  If correct, it would imply there are no “clinically insignificant” PE – which flies in the face of our evidence of overdiagnosis.

These authors found 116 SSPE, 632 proximal PE, and 2,980 patients without PE.  The folks with proximal PE and SSPE, by almost every measure, had significantly more thromboembolic risk factors – particularly malignancy – than the folks without PE.  Unsurprisingly, given the identical VTE risk profiles of the proximal and SSPE groups, there was essentially no difference in the rate of recurrent VTE during 3-month follow-up – 4 patients (3.6%) with SSPE and 14 patients (2.5%) with proximal PE.  Given their absence of risk factors, the patients without PE at baseline had only a 1.1% incidence of VTE during the follow-up period.  All-cause mortality was 10.3% for SSPE, 6.3% for proximal PE, and 5.4% for patients without PE at baseline.  1.6% of both the SSPE and proximal PE groups suffered major bleeding complications from anticoagulation.

So, there is some truth to the authors’ conclusion that SSPE has a clinical course similar to proximal PE.  However, the clinical significance of SSPE is almost certainly confounded by co-occurring comorbid conditions.  It is reasonable to suggest both true PE and SSPE are poor prognostic indicators of background disease burden, rather than the salient pathologic diagnosis.  Anticoagulation may not be the most important course of action; rather, the priority may be identifying and treating the underlying cause, if present.

This article ought not to support any argument regarding the necessity of diagnosing and treating sub-segmental PE, except in the context of a broader approach to the patient.

“Risk profile and clinical outcome of symptomatic subsegmental acute pulmonary embolism”
www.ncbi.nlm.nih.gov/pubmed/23736701

Happy Independence Day!

I ought to have posted this piece regarding firework injuries on Wednesday to get folks in the mood – but, better late than never!

This is an entertaining little experiment published in JAMA investigating the mechanism of ocular trauma from fireworks.  These authors created a setup in which a cadaveric eye was suspended in a network of sensors – and then concussive charges and fireworks were exploded at various distances.

Based on their experiments, these authors conclude most of the ocular injury potential is superficial and results from flying debris, rather than from any explosive pressure wave.  Fascinating little study!

“Mechanisms of Eye Injuries From Fireworks”
www.ncbi.nlm.nih.gov/pubmed/22760285