Predicting Deterioration After Admission

This is a decidedly unsexy topic that I guarantee your Medical Director or QI committee cares about a lot.  Where I work in particular, we occasionally have a prolonged boarding event: the patient isn’t reassessed within a certain time frame, the patient is transported out of the ED – and they arrive on the floor or step-down unit only to have Rapid Response called for an unanticipated escalation in care.

This is apparently a bigger deal in the United Kingdom, where their government hospital body recommends employing a risk-stratification system to predict patient deterioration.  These two articles discuss the derivation in the UK and the validation in Canada of the “ViEWS” score, which is named in part after the electronic health record system that stores their physiologic data.  The general gist: the authors of the first article derived a score incorporating pulse, respiratory rate, temperature, systolic BP, O2 saturation, whether the patient was on oxygen, and a measure of CNS alertness.  They then compared it to several other scoring systems and, amazingly enough, the scoring system they derived – using the system from the company the authors’ wives work for and in which they own shares of stock – works better than the other systems.

An abbreviated version of this was put into validation at a Canadian hospital that does not use any of the equipment or have any financial conflicts of interest.  They found equally good results – which, in summary, they give as four risk-stratification groups:
 – < 3 points: 65% of all patients, only 0.02% died within 48 hrs.
 – 3-6 points: 28% of all patients, 0.41% died within 48 hrs.
 – 7-10 points: 6% of all patients, 3% died within 48 hrs.
 – ≥ 11 points: 0.7% of all patients, 13.8% died within 48 hrs.
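
For illustration only – this is not the authors’ software, just a minimal sketch of the summary strata above (the function name and structure are my own):

```python
def views_risk_band(score):
    """Map an aggregate ViEWS score to the four risk strata
    and 48-hour mortality reported in the Canadian validation."""
    if score < 3:
        return ("low", 0.0002)       # 0.02% died within 48 hrs
    elif score <= 6:
        return ("moderate", 0.0041)  # 0.41%
    elif score <= 10:
        return ("high", 0.03)        # 3%
    else:
        return ("very high", 0.138)  # 13.8%

# A boarding patient whose score drifts from 4 to 8 has crossed
# from the 0.41% stratum into the 3% 48-hour mortality stratum.
print(views_risk_band(4))  # ('moderate', 0.0041)
print(views_risk_band(8))  # ('high', 0.03)
```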

So, yes, we all can probably look at the patients scoring >11 and know they’re sick without a scoring system.  However, this might be a model to look at with nursing staff to help change the parameters for floor beds or to reassess which patients can be downgraded in order to free up more intensive resources upstairs.  Just don’t necessarily buy the product being hawked by the original authors.

“ViEWS—Towards a national early warning score for detecting adult inpatient deterioration.”
www.ncbi.nlm.nih.gov/pubmed/20637974

“Validation of an abbreviated Vitalpac™ Early Warning Score (ViEWS) in 75,419 consecutive admissions to a Canadian Regional Hospital”
www.ncbi.nlm.nih.gov/pubmed/21907689

Medication Errors During Resuscitation

According to previous literature from 2002, up to 19% of medication doses are administered in error to hospitalized patients.  Presumably, we’ve improved.

Apparently, we haven’t.  This is a prospective observational study by pharmacists in Pittsburgh who observed the inpatient Medical Emergency Team in operation – which, in this instance, was a physician-led team with “full” critical care capabilities, as opposed to their non-physician Rapid Response Team.  They observed medication administration during 50 of these calls and found 1.6 errors per medication administration.  Yes, they really observed more than one error per dose – but 66% of those issues involved aseptic technique.  Subtracting those, they observed an error with merely every other dose.  Of the remainder, 46% were prescribing errors, 28% administration technique, 14% mislabeling, 10% preparation, and 2% improper doses.  The authors ultimately conclude that 14% of the total non-aseptic errors were truly harmful, not just “errors”.

Despite the small sample size, I think it’s a fair assessment that “medical emergency” situations can be chaotic and error-prone – and we still have a ways to go to implement systemic changes to prevent errors.

In the end, the pharmacists’ solution is – more pharmacists.  Hmmm….

“Medication Errors During Medical Emergencies in a Large, Tertiary Care, Academic Medical Center”
www.ncbi.nlm.nih.gov/pubmed/22001000

Dabigatran Worsens/Does Not Worsen Bleeding

Stroke and Circulation are both Journals under the umbrella of the American Heart Association.  So, when they publish articles that come to contrasting conclusions, I find that entertaining.

Both of these articles are mouse models of bleeding on dabigatran, using C57BL/6 or CD-1 mice.  Sadly, they are frighteningly complex in their adjustments and statistical analyses – which defeats my ability to concisely summarize the findings and methods.

In short, one of these articles looks at intracranial hemorrhage after collagenase injection in mice receiving several different doses of oral dabigatran, comparing it to controls, warfarin, lepirudin, fondaparinux, and heparin.  It appears – and the authors’ final conclusion is – that dabigatran is the least harmful of all the anticoagulants, landing about halfway between controls and the other agents.  They also shoot the mice with lasers in another portion of the study, and dabigatran “wins” that as well.

The other article looks at trying to reverse dabigatran – which, if you recall the human study I posted a few weeks back, was not successful in humans.  However, the human trials all used surrogate markers of bleeding in the form of laboratory measurements of clotting.  What entertains me is that, in contrast to the other study, these authors have no trouble inducing bleeding and significant ICH formation with dabigatran.  In any event, once the mice were adequately bleeding, the authors compared prothrombin complex concentrates (specifically, Beriplex), FFP, and FVIIa for treatment of ICH 30 minutes after induced injury with collagenase.  Happily, PCCs attenuated the induced ICH in a dose-dependent manner, while the others failed.

So, perhaps this “novel, reversible” anticoagulant has a treatment option for life-threatening bleeding.  Human confirmation, at least case reports, needed.

“Anticoagulation With the Oral Direct Thrombin Inhibitor Dabigatran Does Not Enlarge Hematoma Volume in Experimental Intracerebral Hemorrhage”
http://circ.ahajournals.org/content/early/2011/09/11/CIRCULATIONAHA.111.035972.abstract

“Hemostatic Therapy in Experimental Intracerebral Hemorrhage Associated With the Direct Thrombin Inhibitor Dabigatran”


Sodium Polystyrene Sulfonate For Lithium Toxicity

This one is for @drsamko, thanks to his tweet yesterday.  


This is the most recent of 19 articles in PubMed for the search “sodium polystyrene sulfonate lithium” – a retrospective cohort review looking at the use of SPS in the treatment of lithium toxicity.  Given that lithium and potassium are similarly charged cations, multiple animal studies have evaluated SPS in lithium overdose, but only case reports exist in humans.  These authors reviewed 9 years of cases at their institutions, two hospitals in Montreal, Canada, comparing lithium serum half-life between patients prescribed SPS and patients who were not.


They only looked at chronic overdoses admitted for management – of 90 patients, 72 were chronic, and 48 of those had the data points needed to properly evaluate half-life.  36 received “standard treatment” and 12 were prescribed SPS.  The authors don’t describe the standard-treatment group well, and don’t indicate whether any received hemodialysis – but I get the impression the treatment for chronic toxicity employs HD only on rare occasions of renal failure.  Of the 12 that received SPS, most simply received IV hydration and observation in addition to SPS – and one received hemodialysis due to renal failure.  The half-life of lithium in the controls was 43 hours, compared with 20.5 hours in the SPS group.
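
To put those half-lives in perspective, here is a quick back-of-the-envelope sketch assuming simple first-order elimination (an approximation – lithium kinetics are more complicated than a single compartment):

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of serum lithium remaining after t hours,
    assuming first-order (exponential) elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# After 48 hours at the control half-life of 43 hrs:
print(round(fraction_remaining(48, 43), 2))    # 0.46
# vs. the SPS-group half-life of 20.5 hrs:
print(round(fraction_remaining(48, 20.5), 2))  # 0.2
```

Roughly 46% of the level persists at 48 hours without SPS versus about 20% with it – which is where the length-of-stay argument comes from.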


SPS isn’t totally benign – there was mild hypokalemia in half their treatment population – and in rare cases it causes intestinal necrosis.  And, considering chronic lithium toxicity generally has a benign course, you could go either way.  You can certainly argue that decreased hospital length-of-stay is a significant financial and health benefit and justify giving it, though, so it’s worth knowing about.


“Successful treatment of lithium toxicity with sodium polystyrene sulfonate: a retrospective cohort study”
www.ncbi.nlm.nih.gov/pubmed/19842945

Novel Ischemia Prediction from CCTA

One of the arguments against CCTA is that it only describes coronary anatomy – and has no demonstrated clinical predictive value regarding whether the observed lesions are flow-limiting or potentially related to anginal symptoms.  This study develops a computational fluid dynamics model that attempts to predict flow through coronary stenoses seen on CCTA.

Korea, Latvia, and California come together to evaluate 103 patients in a multicenter trial in which patients with suspected CAD underwent CCTA, invasive coronary angiography, and fractional flow reserve measurement.  They used only 256- and 64-slice scanners for CCTA, and CAD was quantified as none, mild (0-49%), moderate (50-70%), or severe (>70%).  Patients then underwent invasive coronary angiography, where ischemia-related flow limitation was defined as a fractional flow reserve < 0.80.  The study group then developed a method of deriving the FFR from CCTA data and compared it to the actual measurements from invasive coronary angiography using the same threshold value.

The conclusions from this article depend on what takeaways you’re looking for.  On one hand, the FFR-CT method was pretty decent – 87.9% sensitive and 82.2% specific for their definition of ischemia-causing lesions.  The other real takeaway is that CCTA has abysmal performance at the >50% stenosis threshold typically used in CCTA studies.  Their calculated +LR for CCTA stenoses >50% was only 1.51, in the setting of a specificity of 39.6%.  To me, this is another nail in the coffin showing CCTA is the d-Dimer of CAD, leading to a ton of unnecessary testing.
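
The likelihood-ratio arithmetic is easy to check yourself.  A minimal sketch (the CCTA sensitivity of ~91% is my back-calculation from the quoted +LR and specificity, not a figure from the paper):

```python
def positive_lr(sensitivity, specificity):
    """+LR = sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)

# FFR-CT at the FFR < 0.80 threshold: 87.9% sensitive, 82.2% specific
print(round(positive_lr(0.879, 0.822), 2))  # 4.94
# CCTA >50% stenosis: a +LR of 1.51 with 39.6% specificity
# implies a sensitivity of roughly 91%
print(round(positive_lr(0.91, 0.396), 2))   # 1.51
```

A +LR of ~1.5 barely moves the post-test probability – hence the “d-Dimer of CAD” quip.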

Considering it took them 5(!) hours to generate the FFR-CT measurement – based on Newtonian fluid assumptions and the Navier-Stokes equations, run on a parallel supercomputer – I don’t think we’ll be seeing this anytime soon.  But hope is out there for the future.

“Cardiac Imaging Diagnosis of Ischemia-Causing Coronary Stenoses by Noninvasive Fractional Flow Reserve Computed From Coronary Computed Tomographic Angiograms”
http://www.theheart.org/article/1299631.do

Soft Drinks & Youth Aggression

This is not an EM article – but it was too bizarre to pass up.  Apparently, soft drink and junk food consumption has served as a legal strategy for excusing homicide (e.g., the ‘Twinkie Defense’) – and this study finds an association to support it.

The authors surveyed 2,725 Boston high-school students regarding non-diet soft drink use and violence towards peers, dates, or children, as well as firearm use.  Attempting to control for other factors, they eventually find statistically significant associations between youths who drink >5 cans of soft drinks per week and increased alcohol use, increased tobacco use, and all categories of violence.  In fact, for all four categories of violence, the incidence increased in a dose-dependent manner with soft drink consumption.

This is, of course, an observed association, not necessarily a causal relationship, although the authors speculate on how sugars and caffeine might incite aggression.  If you are the parent of a high-school student, it isn’t necessarily going to prevent violence to deny them access to non-diet soft drinks – but, if your high-school student is a heavy soft drink consumer, look out!

“The ‘Twinkie Defense’: the relationship between carbonated non-diet soft drinks and violence perpetration among Boston high school students.”
http://injuryprevention.bmj.com/content/early/2011/10/14/injuryprev-2011-040117.abstract

Do/Don’t Scan the Trauma Patient

In a study attempting to build consensus, the authors instead discovered philosophical differences between the trauma team and the emergency physician.

This is a prospective observational study in which 701 blunt trauma activations at LAC-USC were enrolled, with the EP and the trauma team each giving an opinion on which CT studies were necessary.  The authors then reviewed which scans were obtained, sorted out the scans that were undesired by one or both physicians, and determined whether any injuries would be missed.

Bafflingly, 7% of the 2,804 scans obtained during the study period were deemed unnecessary by both the emergency physician and the trauma attending – yet were still performed.  The remaining 794 undesired scans were desired by the trauma team but not the emergency physician.  Their question – would anything of significance have been missed if the scans had been more selectively ordered?

The answer is – yes and no.  The trauma surgeon authors say yes, justifying it by noting that many of the abnormalities missed on CT required closer monitoring – just because none of the missed injuries deteriorated during the study period does not mean they were not significant.  The emergency physician authors point to a 56% reduction in pan-scanning and the benefits of reduced radiation and cost, and hang their hats on the fact that none of the hypothetically missed injuries changed management.

So, who is right?  Both, and neither, of course.  Emergency physicians and trauma teams should work on developing evidence-based clinical decision rules to support selective scanning in blunt trauma – and then try this study again to see if they can generate results they can agree on.

Definitely a fun read.

As far as medical literature goes, of course.

“Selective Use of Computed Tomography Compared With Routine Whole Body Imaging in Patients With Blunt Trauma.”
www.ncbi.nlm.nih.gov/pubmed/21890237

ECMO For Influenza

Not many institutions in the U.S. are set up for ECMO in adults, particularly in the Emergency Department, but there are several small datasets out there indicating it should be a significant part of our arsenal for selected patients.  This is a review of ECMO’s use in H1N1 influenza-associated ARDS in England during the “Swine Flu” pandemic.

The authors retrospectively reviewed 80 patients with H1N1 from prospectively collected cohort data, all of whom required critical care for ARDS and were referred for ECMO in the United Kingdom.  Through some data calisthenics, these 80 patients were compared to matched subgroups drawn from the 1,756 patients in the H1N1 critical care cohort.  Of the 80 patients referred for ECMO, only 69 actually received it.  However, in an intention-to-treat analysis of all 80 referred patients, referral for ECMO was associated with a significant survival advantage – approximately 24% mortality in the ECMO-referral group, compared with 46-52% in the matched controls, depending on which method was used to identify them.

Not a big stretch to interpret this as a positive treatment association for ECMO in H1N1-associated ARDS.  But, I’d still get your flu shot.

“Referral to an Extracorporeal Membrane Oxygenation Center and Mortality Among Patients With Severe 2009 Influenza A(H1N1)”

EMS Blood Pressures Aren’t Unreliable

Ever since a trauma patient billed as normotensive with stable vital signs rolled off the elevator with CPR in progress, having “just lost pulses,” I’ve been somewhat skeptical of my prehospital report, including vital signs.  This study, at least, supports the position that, barring untruthfulness, EMS providers’ vital signs are usually not clinically significantly different from vital signs obtained on arrival in the Emergency Department – even if the observed EMS technique wasn’t perfect.

The first phase of the study looked at 100 patients arriving in the Emergency Department.  BP measurements were obtained within 5 minutes of arrival and compared to the reported measurement from EMS.  There was approximately a ±17 mmHg spread between the systolic pressures measured by EMS and the first BP in the Emergency Department.

The second phase of the study had observers riding with EMS, documenting the technique the providers used to obtain vital signs – and then having the research assistants perform the same measurement in the field as well.  In this phase, EMS providers’ systolic pressures were only a ±10.1 mmHg spread away from the research assistants’ – despite deficiencies in technique and a terminal digit preference for numbers ending in zero.

The article concludes that EMS providers’ measurements had poor agreement with subsequent measurements, and that the differences were clinically significant.  However, based on the distribution of error in their Bland-Altman plots, I disagree with that assessment, as most of the variability occurred throughout a range of inconsequential systolic pressures between 120 and 170.  They unfortunately had very few patients with clinically important hypo- or hypertension, so the question really remains unanswered whether EMS measurements at the clinically important extremes are reliable.
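
For reference, the “limits of agreement” in a Bland-Altman analysis are just the mean paired difference ± 1.96 standard deviations of the differences.  A minimal sketch with entirely hypothetical paired readings (not the study’s data):

```python
import statistics

def bland_altman_limits(measure_a, measure_b):
    """Return (bias, lower, upper): the mean difference and the
    95% limits of agreement for two sets of paired measurements."""
    diffs = [a - b for a, b in zip(measure_a, measure_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired systolic pressures: EMS vs. ED arrival
ems = [128, 142, 110, 156, 134, 120]
ed = [120, 150, 115, 148, 130, 131]
bias, lower, upper = bland_altman_limits(ems, ed)
print(round(bias, 1), round(lower, 1), round(upper, 1))
```

The width of that interval – not the bias alone – is what determines whether two measurement methods agree closely enough for clinical use.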

I do find it rather entertaining that their methods included a “specially trained research assistant” to measure blood pressure, referred to in the title as an “expert”.  You can be an “expert” in anything nowadays, apparently.

“Agreement between emergency medical services and expert blood pressure measurements.”
www.ncbi.nlm.nih.gov/pubmed/21982624

A Third of TPA Patients Do Not Have Stroke

…but they almost all do well!  Only 5.1% of patients without stroke who receive TPA end up with intracerebral hemorrhage – so it’s OK that we give TPA to a ton of patients without a confirmed diagnosis of stroke, right?

This is a retrospective Finnish registry study of 1,104 consecutive TPA patients enrolled in a prospective cohort.  Of these, 119 had basilar artery occlusion – which is angiographically proven prior to treatment – and were excluded from the analysis, along with a couple of others for other reasons.  This left 985 patients initially diagnosed with ischemic stroke, and, eventually, 14 of those patients were diagnosed with a stroke mimic such as migraine, epilepsy, or a demyelinating disorder.  The authors then go on to say that stroke mimics such as these accounted for a mere 1.4% of all TPA patients, and that none of them had ICH.

But, this isn’t exactly a true reading of their data.  The authors also state that 275 of their patients had “neuroimaging negative ischemic stroke”, which is to say, their follow-up MRI detected no sign of infarct.  Now, there is a false-negative rate on DWI MRI for stroke, but it’s in the range of 5% for acute infarcts, and generally involves small lacunar, small cortical, and some posterior circulation strokes.  Not only that, it’s reasonable to suggest that around 40% of TIAs actually have DWI or FLAIR sequence abnormalities as well.

So, some of their “neuroimaging negative ischemic stroke” group probably does have ischemic stroke with false negative MRI – but not 30% of the study population.  And, some of their neuroimaging positive group is likely false positive from TIA as well.  These numbers for stroke mimics are also far below other reported case series, which have estimated 10-30% incidence, depending on whether TIAs are included.

I absolutely cannot fathom the line of reasoning and distortion Neurology is developing to justify recklessly pushing TPA onto a larger population.

“Stroke Mimics and Intravenous Thrombolysis”
http://www.ncbi.nlm.nih.gov/pubmed/22000770