Thrombolysis and the Aging Brain

The bleeding complications of thrombolysis are well-described, but frequently under-appreciated in the acute setting. Stroke patients often disappear upstairs quickly enough after treatment in the Emergency Department that we rarely see the neurologic worsening associated with post-thrombolysis hemorrhage.

Risk factors for post-tPA ICH are well-known, but often difficult to precisely pin down for an individual patient. This study pools patients from up to 15 studies to evaluate the effect of leukoaraiosis on post-tPA hemorrhage. Leukoaraiosis is, essentially, a cerebral small-vessel disease likely related to chronic ischemic damage. It has long been recognized as a risk factor for increased hemorrhage and poor outcome, independent of age at treatment.

In this study, the authors pooled approximately 5,500 patients, half of whom were identified as having leukoaraiosis. The unadjusted absolute risk of symptomatic ICH in those without leukoaraiosis was 4.1%, while the risk in those with it was 6.6%. Then, looking at the 2,700 patients with leukoaraiosis, those with mild disease had an unadjusted absolute risk of 4.0%, compared with 10.2% for those with moderate or severe disease. Similar trends towards worse functional outcomes were also seen with worsening leukoaraiosis.
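For context, a little back-of-the-envelope arithmetic on these unadjusted figures – my own calculation, not the authors' adjusted analysis – translates them into relative risks and numbers-needed-to-harm:

```python
# Rough translation of the reported unadjusted sICH risks into relative risk
# and number-needed-to-harm (my arithmetic, not the authors' adjusted analysis).
def risk_summary(risk_exposed, risk_unexposed):
    rr = risk_exposed / risk_unexposed   # relative risk
    ard = risk_exposed - risk_unexposed  # absolute risk difference
    nnh = 1 / ard                        # number needed to harm
    return round(rr, 2), round(ard, 3), round(nnh)

print(risk_summary(0.066, 0.041))  # any leukoaraiosis vs none: RR ~1.6, NNH ~40
print(risk_summary(0.102, 0.040))  # moderate/severe vs mild:   RR ~2.6, NNH ~16
```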

The moral of the story: the baseline health of the brain matters. When discussing the risks, benefits, and alternatives of tPA for informed consent with a family, these substantially increased risks should be clearly conveyed for patients with leukoaraiosis, even when treatment is otherwise potentially indicated.

“Leukoaraiosis, intracerebral hemorrhage, and functional outcome after acute stroke thrombolysis”

http://www.neurology.org/content/early/2017/01/27/WNL.0000000000003605.abstract

Ottawa, the Land of Rules

I’ve been to Canada, but I’ve never been to Ottawa. I suppose, as the capital of Canada, it makes sense they’d be enamored with rules and rule-making. Regardless, it still seems they have a disproportionate burden of rules, for better or worse.

This latest publication describes the “Ottawa Chest Pain Cardiac Monitoring Rule”, which aims to diminish resource utilization in the setting of chest pain in the Emergency Department. These authors posit the majority of chest pain patients presenting to the ED are placed on cardiac monitoring in the interest of detecting a life-threatening malignant arrhythmia, despite such events being rare. Furthermore, the literature regarding alarm fatigue demonstrates greater than 99% of monitor alarms are erroneous and typically ignored.

Using a 796-patient sample of chest pain patients receiving cardiac monitoring, these authors validate their previously described rule for avoiding cardiac monitoring: the patient is chest pain-free and the ECG is normal or shows only non-specific changes. In this sample, 284 patients met these criteria, and none of them suffered an arrhythmia requiring intervention.

While this represents 100% sensitivity for their rule, as a resource utilization intervention there is obviously room for improvement. Of the patients not meeting their rule, only 2.9% suffered an arrhythmia – mostly just atrial fibrillation requiring pharmacologic rate or rhythm control. These criteria probably ought to be considered just a minimum standard, and there is plenty of room for additional exclusions.
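With zero events among the 284 rule-negative patients, the best we can do is bound the miss rate – the classic “rule of three” gives a quick approximation of the 95% upper confidence limit:

```python
# "Rule of three": with 0 events observed in n patients, the 95% upper
# confidence bound on the true event rate is approximately 3/n.
n = 284
upper = 3 / n
print(f"0/{n} arrhythmias -> 95% upper bound ~{upper:.1%}")  # ~1.1%
```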

Anecdotally, not only do most of our chest pain patients in my practice not receive monitoring – many receive their entire work-up in the waiting room!

“Prospective validation of a clinical decision rule to identify patients presenting to the emergency department with chest pain who can safely be removed from cardiac monitoring”
http://www.cmaj.ca/content/189/4/E139.full

Can We Trust Our Computer ECG Overlords?

If your practice is like my practice, you see a lot of ECGs from triage. ECGs obtained for abdominal pain, dizziness, numbness, fatigue, rectal pain … and some, I assume, are for chest pain. Every one of these ECGs turns into an interruption for review, to ensure no concerning evolving syndrome is missed.

But, a great number of these ECGs are read as “Normal” by the computer – and, anecdotally, these reads are nearly universally correct. This raises the very reasonable question of whether a human need be involved at all.

This simple study examines the real-world performance of computerized ECG interpretation – specifically, the Marquette 12SL software. Over a 16-week convenience sample period, 855 triage ECGs were performed, 222 of which were reported as “Normal” by the computer software. These 222 ECGs were all reviewed by a cardiologist, and 13 were ultimately assigned some pathology – all of which were mild, non-specific abnormalities. Two Emergency Physicians then reviewed these 13 ECGs to determine what actions, if any, might be taken had they been presented in a real-world context. One of these ECGs was determined by one EP to be sufficient to put the patient in the next available bed from triage, while the remainder required no acute triage intervention. Retrospectively, the patient judged to have an actionable ECG was discharged from the ED and had a normal stress test the next day.

The authors conclude this negative predictive value for a “Normal” read of the ECG approaches 99%, and could potentially lead to changes in practice regarding immediate review of triage ECGs. While these findings have some limitations in generalizability regarding the specific ECG software and a relatively small sample, I think they’re on the right track. Interruptions in a multi-tasking setting lead to errors of task resumption, while the likelihood of significant time-sensitive pathology being missed is quite low. I tend to agree this could be a reasonable quality improvement intervention with prospective monitoring.
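The ~99% figure makes sense if “negative” is defined as “nothing actionable” rather than “no abnormality whatsoever” – a quick check of the arithmetic, based on my reading of the reported counts rather than the authors' own calculation:

```python
# Negative predictive value of a computer "Normal" read, from the reported
# counts: 222 "Normal" ECGs, 13 with minor pathology on cardiology overread,
# 1 judged potentially actionable by an emergency physician.
normal_reads = 222
any_pathology = 13
actionable = 1

npv_any = (normal_reads - any_pathology) / normal_reads      # ~94.1%
npv_actionable = (normal_reads - actionable) / normal_reads  # ~99.5%
print(f"NPV, any abnormality: {npv_any:.1%}")
print(f"NPV, actionable abnormality: {npv_actionable:.1%}")
```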

“Safety of Computer Interpretation of Normal Triage Electrocardiograms”
https://www.ncbi.nlm.nih.gov/pubmed/27519772

Discharged and Dropped Dead

The Emergency Department is a land of uncertainty. In a time-compressed, zero-continuity environment with limited resources, we frequently need to make relatively rapid decisions based on incomplete information. The goal, in general, is to treat and disposition patients in an advantageous fashion to prevent morbidity and mortality, while minimizing costs and other harms.

This confluence of factors leads, unfortunately, to a handful of patients who meet their end shortly following discharge. A Kaiser Permanente Emergency Department cohort analysis found 0.05% of patients died within 7 days of discharge, and identified a few interesting risk factors for these outcomes. This new article, in the BMJ, describes the outcomes of a Medicare cohort following discharge – and finds both similarities and differences.

One notable difference, and a focus of the authors, is that 0.12% of patients discharged from the Emergency Department died within 7 days. This is a much larger proportion than in the Kaiser cohort; however, the Medicare population is obviously a much older cohort with greater comorbidities. They also found similar risk factors for death – most prominently, “altered mental status”. The full accounting of clinical features is presented in a figure in the original article.

Then, there were some system-level factors as well. In their multivariate model, rural emergency departments and those with low annual volumes were potentially associated with an increased risk of death. This data set is insufficient to draw any specific conclusions regarding these contributing factors, but it raises questions for future research. In general, however, this is interesting – if not terribly surprising – data, even if it is hard to identify specific operational interventions based on these broad strokes.

“Early death after discharge from emergency departments: analysis of national US insurance claims data”
http://www.bmj.com/content/356/bmj.j239

The Intravenous Contrast Debate

Does intravenous contrast exposure increase the likelihood of developing renal insufficiency? The consensus opinion has been, generally, “yes”. However, evaluated under a closer lens, it is apparent these data come from high-dose use during angiography, from exposure to high-osmolar contrast material not routinely used in the present day, and from weak observational cohort studies.

The modern take is, increasingly, potentially “no”. However, it is virtually impossible to conclusively study the effect of intravenous contrast exposure. A prospective, controlled trial would require that patients for whom a contrast study was believed important to their medical care be randomized to not receiving the indicated study, leading to all manner of potential harms. Therefore, we are reduced to looking backwards and comparing patients undergoing a contrasted study with those who did not.

This study is probably the best version of this type of evidence we are going to get: a propensity-matched analysis of patients undergoing contrast CT, non-contrast CT, or no CT at all. Each cohort comprised between 5,000 and 7,000 patients, stratified by baseline comorbidities, medications administered, illness severity indicators, and baseline renal function. After these various adjustments and weightings, the authors did not observe any effect of intravenous contrast administration on subsequent acute kidney injury; the analysis was limited to patients with a baseline creatinine of 4.0 mg/dL or below.
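For the unfamiliar, this is roughly how a propensity-matched comparison works: model each patient's probability of receiving contrast from their baseline covariates, then pair exposed and unexposed patients with similar scores. A minimal sketch in Python – the dataframe, column names, and simple greedy 1:1 matching are illustrative assumptions, not the authors' actual implementation:

```python
# Minimal propensity-score matching sketch. The dataframe, column names, and
# greedy 1:1 nearest-neighbor matching are illustrative assumptions only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_cohorts(df: pd.DataFrame) -> pd.DataFrame:
    covariates = ["age", "baseline_creatinine", "diabetes",
                  "chf", "nephrotoxic_meds", "illness_severity"]  # hypothetical
    # Step 1: model probability of contrast exposure from baseline covariates.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["contrast_ct"])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    # Step 2: greedily pair each exposed patient with the nearest
    # unexposed patient by propensity score, without replacement.
    exposed = df[df["contrast_ct"] == 1]
    controls = df[df["contrast_ct"] == 0].copy()
    pairs = []
    for idx, row in exposed.iterrows():
        match_idx = (controls["ps"] - row["ps"]).abs().idxmin()
        pairs.append((idx, match_idx))
        controls = controls.drop(match_idx)
    return pd.DataFrame(pairs, columns=["exposed_idx", "control_idx"])
```

Outcomes are then compared within the matched pairs, which is why residual confounding by unmeasured severity of illness remains the Achilles heel of this design.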

I think this is basically a reasonable conclusion, given the approach. There has been a fair bit of observational evidence regarding the risk of AKI after a contrast CT, but it is impossible to separate the effect of contrast from the effects of the concurrent medical illness requiring the contrast CT. Every effort, of course, should be made to minimize the use of advanced imaging – but in many instances, the morbidity of a missed diagnosis almost certainly outweighs the risk from intravenous contrast.

“Risk of Acute Kidney Injury After Intravenous Contrast Media Administration”
http://www.annemergmed.com/article/S0196-0644(16)31388-9/abstract

Pediatric Lactate & Sepsis

Some syndicated media has “Shark Week”. We have Sepsis Week!

The current generation of sepsis care is defined not just by our quixotic quest for simplified early warning tools, but also, more than anything, by lactate levels. In some ways, lactate is our friend – no more central catheter placement solely for measurement of central venous oxygenation. However, the ease of checking a lactate level also means we apply it indiscriminately. The lactate has become the D-dimer of infection – increasingly weakly predictive the more we rely upon it.

This is a snapshot of the performance of lactate levels in pediatric sepsis, drawn from an observational registry of patients evaluated in the Emergency Department of a pediatric hospital, consisting of 1,299 patients in whom clinically suspected sepsis resulted in a lactate order. These authors hypothesized that, as in adults, a lactate level of 36 mg/dL (4 mmol/L) or greater would portend increased mortality.

And, naturally, they were correct. However, its predictive value was virtually nil. There were 103 patients with lactate elevated above their cut-off and 1,196 below. Only 5 of the 103 patients with elevated lactate suffered 30-day mortality, while 20 of the 1,196 below the cut-off did. A mortality of 4.8% is higher than 1.7%, but the sensitivity is only 20% – and a specificity of 92.3%, with such a low prevalence of the primary outcome, means over 95% of elevated lactate levels are “false positives”.
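These test characteristics fall straight out of the reported counts – reconstructing the 2×2 table:

```python
# 2x2 table reconstructed from the reported counts: lactate >= 4 mmol/L as the
# "test", 30-day mortality as the outcome.
tp, fn = 5, 20        # deaths with elevated / non-elevated lactate
fp = 103 - tp         # elevated lactate, survived
tn = 1196 - fn        # lactate below cut-off, survived

sensitivity = tp / (tp + fn)  # 5/25      = 20.0%
specificity = tn / (tn + fp)  # 1176/1274 ~ 92.3%
ppv = tp / (tp + fp)          # 5/103     ~ 4.9% -> >95% of positives are "false"
print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, PPV {ppv:.1%}")
```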

There are some limitations here, however, that could have substantial effects on the outcomes. There is a selection bias inherent to eligibility, in that lactates were likely ordered only on the most ill-appearing patients; the effect of this would be to improve the apparent performance characteristics of the test in the study population. Conversely, patients with elevated lactate levels likely received more aggressive treatment than if the treating clinicians had been blinded to the result; the effect of this would be a mortality benefit in the population with elevated lactate, worsening the apparent test characteristics.

But, hair-splitting aside, these pediatric results are grossly similar to those in adults. An elevated lactate is a warning signal, but should hardly be relied upon.

“Association Between Early Lactate Levels and 30-Day Mortality in Clinically Suspected Sepsis in Children”
https://www.ncbi.nlm.nih.gov/pubmed/28068437

A qSOFA Trifecta

There’s a new sepsis in town – although, by now, it’s not very “new” anymore. We’re supposedly all-in on Sepsis-3, which in theory is superior to the old sepsis.

One of the most prominent and controversial aspects of the sepsis reimagining is the discarding of the flawed Systemic Inflammatory Response Syndrome criteria and their replacement with the quick Sequential Organ Failure Assessment (qSOFA). In theory, qSOFA replaces the non-specific items from SIRS with physiologic variables more closely related to organ failure. However, qSOFA was never prospectively validated or compared prior to its introduction.
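For reference, the entire instrument fits in a few lines – the Sepsis-3 qSOFA items are a respiratory rate of 22/min or greater, a systolic blood pressure of 100 mmHg or less, and any altered mentation (GCS < 15), with 2 or more points flagging “high risk”:

```python
# qSOFA per the Sepsis-3 definitions; each criterion scores one point.
def qsofa(respiratory_rate: float, systolic_bp: float, gcs: int) -> int:
    return (
        (respiratory_rate >= 22)  # tachypnea
        + (systolic_bp <= 100)    # relative hypotension
        + (gcs < 15)              # any altered mentation
    )

print(qsofa(respiratory_rate=24, systolic_bp=96, gcs=15))  # 2 -> "high risk"
```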

These three articles give us a little more insight – and, as many have voiced concern already, it appears we’ve just replaced one flawed instrument with another.

The first article, from JAMA, describes the performance of qSOFA against SIRS and a 2-point increase in the full SOFA score in an ICU population. This retrospective analysis of 184,875 patients across 15 years of registry data from 182 ICUs in Australia and New Zealand showed very little difference between SIRS and qSOFA with regard to predicting in-hospital mortality. Both screening tools were also far inferior to the full SOFA score – although, in practical terms, the adjusted AUCs were only ~0.69 for both SIRS and qSOFA, versus 0.76 for SOFA. As prognostic tools, then, none of these are fantastic – and, unfortunately, qSOFA did not seem to offer any value over SIRS.

The second article, also from JAMA, provides some of the first prospective data regarding qSOFA in the Emergency Department. This sample comprises 879 patients with suspected infection, followed for in-hospital mortality or ICU admission. The big news from this article is the AUC for qSOFA of 0.80, compared with 0.65 for SIRS or “severe sepsis”, as defined by SIRS plus a lactate greater than 2 mmol/L. However, at a cut-off of 2 or more for qSOFA – the advertised cut-off for “high risk” – the sensitivity and specificity were 70% and 79%, respectively.

Finally, a third article, from Annals of Emergency Medicine, also evaluates the performance characteristics of qSOFA in an Emergency Department population. This retrospective evaluation describes the performance of qSOFA at predicting admission and mortality, but differs from the JAMA article by applying qSOFA to a cross-section of mostly high-acuity visits, both with and without suspected infection. Based on a sample of 22,350 ED visits, they found similar sensitivity and specificity of a qSOFA score of 2 or greater for predicting mortality, 71% and 74%, respectively. Performance was not meaningfully different between those with and without infection.
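Translating the reported cut-off performance into likelihood ratios – my arithmetic on the published numbers – illustrates just how little a score of 2 or more moves the needle in either direction:

```python
# Positive and negative likelihood ratios from the published sensitivity
# and specificity at the qSOFA >= 2 cut-off.
def likelihood_ratios(sens, spec):
    return sens / (1 - spec), (1 - sens) / spec

print(likelihood_ratios(0.70, 0.79))  # JAMA ED cohort:   LR+ ~3.3, LR- ~0.38
print(likelihood_ratios(0.71, 0.74))  # Annals ED cohort: LR+ ~2.7, LR- ~0.39
```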

It seems pretty clear, then, this score doesn’t hold a lot of value. SIRS, obviously, has its well-documented flaws. qSOFA seems to have better discriminatory value with regard to the AUC, but its performance at the cut-off level of 2 puts it right in a no-man’s land of clinical utility. It is not sensitive enough to rely upon to capture all patients at high risk for deterioration – but its specificity is also poor enough that using it to screen the general ED population will still result in a flood of false positives.

So, unfortunately, these criteria are probably a failed paradigm perpetuating all the same administrative headaches as the previous approach to sepsis – better than SIRS, but still not good enough. We should be pursuing more robust decision-support built into the EHR, not attempting to reinvent overly simplified instruments without usable discriminatory value.

“Prognostic Accuracy of the SOFA Score, SIRS Criteria, and qSOFA Score for In-Hospital Mortality Among Adults With Suspected Infection Admitted to the Intensive Care Unit”

http://jamanetwork.com/journals/jama/article-abstract/2598267

“Prognostic Accuracy of Sepsis-3 Criteria for In-Hospital Mortality Among Patients With Suspected Infection Presenting to the Emergency Department”

http://jamanetwork.com/journals/jama/fullarticle/2598268

“Quick SOFA Scores Predict Mortality in Adult Emergency Department Patients With and Without Suspected Infection”

http://www.annemergmed.com/article/S0196-0644(16)31219-7/fulltext

Another Step in Antibiotics for Appendicitis

Antibiotics are unnecessary! No, antibiotics are great! No, we give too many antibiotics! It’s getting hard to keep track of which conditions we’re giving and withholding antibiotics for these days.

This article is a teaser for more evidence to come regarding strategies for managing appendicitis without surgical intervention. We’ve seen a few trials already, with essentially unconvincing results in either direction. A large trial of an antibiotics-first strategy in an adult population was criticized for using an open surgical technique rather than laparoscopic – and the one-year failure rate was still rather high. However, a pilot report in a pediatric population probably demonstrates an antibiotics-first strategy is still a reasonable option to present in shared decision-making.

This is a pilot project describing the initial results and feasibility outlook for an antibiotics-first protocol for appendicitis. In this protocol, patients randomized to an antibiotics-first strategy received an intravenous dose of ertapenem in the Emergency Department, were eligible for discharge directly from the Emergency Department, returned for a second dose of ertapenem the next day, and then completed an 8-day course of oral cefdinir and metronidazole.

In their pilot, 42 patients were screened and 30 consented to randomization. Of those randomized to the antibiotics-first arm, 15 were adults and 1 was a pediatric patient. Of the 15 adults, 14 felt well enough for discharge after initial Emergency Department observation; the pediatric protocol called for in-hospital observation regardless of symptoms at presentation.

The results themselves are of lesser consequence than the pilot’s demonstration of the protocol’s feasibility and of the yield at which patients could be enrolled for a larger trial. There were a couple of instances of recurrent appendicitis in the antibiotics-first cohort, one of which was successfully treated with antibiotics a second time, and a couple of surgical complications in the surgery cohort. Costs and overall quality-of-life scores favored the antibiotics-first group, obviously – but, again, this sample is small enough that none of these outcomes have been measured with reliable accuracy or precision.

I think it is reasonable to expect an antibiotics-first strategy to eventually take root as part of acceptable medical practice. However, I suspect this transition will be slow in coming – and more data would be quite helpful in determining any specific risks for antibiotic strategy failures.

“Antibiotics-First Versus Surgery for Appendicitis: A US Pilot Randomized Controlled Trial Allowing Outpatient Antibiotic Management”

https://www.ncbi.nlm.nih.gov/pubmed/27974169

Shenfu!

I will readily admit I am stepping outside the bounds of my expertise with this post – with respect to the “shenfu injection” and its effects on physiology. The authors describe shenfu as “originated from Shenfu decoction, a well-known traditional Chinese formulation restoring ‘Yang’ from collapse, tonifying ‘Qi’ for relieving desertion”. More specifically, from a physiologic standpoint: “Ginsenosides and aconite alkaloids are the main active ingredients in Shenfu. Ginsenosides are the determinant contributor to the vasodilator benefit of Shenfu, whereas the alkaloids play a vital role in the cardiac electrophysiological effect of Shenfu by blocking ion channels”. In China, a pharmacologic shenfu distillate is routinely used to treat sepsis and septic shock as a 100 mL daily injection – and this is a placebo-controlled trial endeavoring to demonstrate its efficacy.

At face value, the trial appears reasonable – a targeted enrollment of 160 patients, with a goal of detecting a 20% absolute difference in 28-day mortality based on an expected overall mortality of 40%. Their primary outcome, however, was actually a set of co-primary outcomes: “length of ICU stay, the duration of vasopressor use, illness severity, and the degree of organ dysfunction.” A proper study, of course, has a single primary outcome – and, considering the study was powered for a mortality difference, that patient-oriented outcome probably ought to have been made primary.
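As a sanity check on that enrollment target, the standard two-proportion sample-size formula – assuming the conventional two-sided α of 0.05 and 80% power, since those inputs aren't restated here – lands almost exactly on 160:

```python
# Two-proportion sample size for detecting 40% vs 20% mortality; alpha = 0.05
# (two-sided) and 80% power are assumed conventional values, not confirmed
# from the paper.
from math import sqrt

p1, p2 = 0.40, 0.20
z_a, z_b = 1.96, 0.8416      # z for alpha/2 = 0.025 and for power = 0.80
p_bar = (p1 + p2) / 2

n_per_arm = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
              + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p1 - p2) ** 2)
print(round(n_per_arm))  # ~81 per arm, ~162 total
```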

Regardless, from the results presented here, it is reasonable to suggest this therapy is promising and worthy of additional evaluation. Several outcomes – ICU length of stay, APACHE II score, and duration of vasopressor use – reached statistical significance favoring the intervention. The mortality outcome did not reach statistical significance, with the intervention at 20.5% and placebo at 27.8%. However, an absolute mortality improvement of 7.3% is nothing to sneeze at – and I would be happy to see more work performed to replicate or generalize these results.

“Shenfu injection for improving cellular immunity and clinical outcome in patients with sepsis or septic shock”

https://www.ncbi.nlm.nih.gov/pubmed/28029485