The EHR – A Tool For Blocking Admissions

This is a mildly entertaining ethnographic study of how ED physicians, IM physicians, and surgeons used the Electronic Health Record (EHR) in the context of patient care in a tertiary medical center.

Essentially, the authors observed and interviewed residents and attendings in their use of the EHR, and identified a function they term “chart biopsy” during the admission handoff process.  Inpatient teams were observed using the EHR to get a quick overview of the patient prior to the handoff, to lay the foundation for the history & physical, and – most entertainingly – as a weapon for negotiating with ED physicians and “blocking” potential admissions.  Other amusing anecdotes include the authors’ characterization of inpatient physicians feeling “less ‘at the mercy’ of ED physicians” after performing a pre-handoff chart biopsy, or feeling as though they could guard against the “disorganized ramblings” of the handoff process.

Overall, the authors correctly identify EHRs as increasingly prevalent supplements to traditional information gathering techniques, and make a reasonable proposal for evolution in EHRs to aid the “chart biopsy” process.

“Chart biopsy: an emerging medical practice enabled by electronic health records and its impacts on emergency department-inpatient admission handoffs”
http://www.ncbi.nlm.nih.gov/pubmed/22962194

Unnecessary Post-Reduction X-Rays?

Falling into the “well, duh” sort of category that cuts through the dogmatic haze, this article examines the ordering of post-reduction radiographs in the Emergency Department.

Specifically, this group of orthopedists from New York City looks at X-ray utilization and length-of-stay after consultation and management of minimally displaced, minimally angulated extremity fractures.  Of the 342 fractures meeting study criteria, 204 subsequently received post-splinting radiography.  None of the patients receiving post-reduction radiography had any change in alignment or change in splint application, and the practice resulted in significantly longer ED length-of-stay.

This leads them to their conclusion that minimally displaced, minimally angulated extremity fractures that do not receive manipulation when splinting should not be re-imaged after splint application.  And, this seems like a fairly reasonable conclusion.  It’s retrospective, the outcomes are surrogates for patient-oriented outcomes, etc., and it would be reasonable to re-evaluate this conclusion in a prospective trial – but if your practice is already not to routinely re-image, this supports continuing your entirely reasonable clinical decision-making.

“Post-Splinting Radiographs of Minimally Displaced Fractures: Good Medicine or Medicolegal Protection?”
http://jbjs.org/article.aspx?articleid=1356145

Longer Resuscitation “Saves”

This article made the rounds a couple weeks ago in the news media, probably based on the conclusion from the abstract stating “efforts to systematically increase the duration of resuscitation could improve survival in this high-risk population.”


They base this statement on a retrospective review of prospectively gathered standardized data from in-hospital cardiac arrests.  Comparing 31,000 patients with ROSC following an initial episode of cardiac arrest against a cohort of 33,000 who did not have ROSC, the authors found that patients who arrested at hospitals with higher median resuscitation times were more likely to have ROSC.  Initial ROSC was tied to survival to discharge, with hospitals having the shortest median resuscitation times showing a 14.5% adjusted survival compared to 16.2% at hospitals with the longest resuscitations.
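For scale, those adjusted survival figures amount to a small absolute difference.  A quick back-of-the-envelope sketch – using only the 14.5% and 16.2% quoted above; the NNT framing is my own, and assumes (generously) the association is causal:

```python
# Back-of-the-envelope: absolute and relative difference in adjusted
# survival between shortest- and longest-resuscitation hospitals.
short_survival = 0.145  # hospitals with shortest median resuscitation time
long_survival = 0.162   # hospitals with longest median resuscitation time

absolute_difference = long_survival - short_survival        # 1.7 points
relative_difference = absolute_difference / short_survival  # ~12% relative

# "Number needed to resuscitate longer" for one extra survivor to
# discharge -- only meaningful if the association were causal.
nnt = 1 / absolute_difference

print(f"ARR: {absolute_difference:.1%}, NNT: {nnt:.0f}")  # → ARR: 1.7%, NNT: 59
```

Roughly 59 prolonged resuscitations per additional survivor to discharge – before asking how many of those survivors are neurologically intact.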


Now, if you’re a glass half-full sort of person, “could improve survival” sounds like an endorsement.  However, when we’re conjuring up hypotheses and associations from retrospective data, it’s important to re-read every instance of “could” and “might” as “could not” and “might not”.  The authors also adjusted for a horde of patient-related covariates, which gives some sense of the difficulty of weeding a significant finding out from the confounders.  The most glaring difference in their baseline characteristics was the 6% absolute difference in witnessed arrests – which, if not accounted for properly, could nearly explain the entirety of their outcomes difference.


It’s also worth considering the unintended consequences of their statement.  What does it mean to continue resuscitation past the point it is judged clinically appropriate?  What sort of potentially well-meaning policies might this entail?  What are the harms to other patients in the facility if nursing and physician resources are increasingly tied up in (mostly) futile resuscitations?  How much additional healthcare cost will result from additional successful ROSC – most of whom are still not neurologically intact survivors?


“Duration of resuscitation efforts and survival after in-hospital cardiac arrest: an observational study”

www.thelancet.com/journals/lancet/article/PIIS0140…9/abstract

How Preposterous News Propagates

Every so often – perhaps more frequently, if you’re continuously canvassing the literature – there’s a rapturous press release regarding a new medical innovation that seems too good to be true.  And, you wonder, how does the lay media get it so wrong?

This study reviewed a consecutive convenience sample of published literature, looking for articles resulting in press releases.  Then, they looked for the elements of each article that made it into the press release, as well as the relative accuracy of the release compared with the overall findings of the article.  Essentially, what they found is that press releases were most likely to contain “spin” when the conclusion of the article abstract itself misrepresented the study findings.

The authors also have an interesting summary of the sort of “spin” found in abstracts that misrepresent study findings.  These include:

 • No acknowledgment of nonstatistically significant primary outcome
 • Claiming equivalence when results failed to demonstrate a statistically significant difference
 • Focus on positive secondary outcome
 • Nonstatistically significant outcomes reported as if they were significant

…and several others.

“Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study”

When Positive D-Dimers are Negative

This is the latest article from Jeff Kline, published in Thrombosis and Haemostasis (don’t you all subscribe to that, too?), concerning pulmonary embolism and d-Dimer.

Wouldn’t it be great if the d-Dimer weren’t a dichotomous cut-off?  Where, if a patient were of sufficiently low pre-test probability, a d-Dimer value that was nearly negative would still contribute a negative likelihood ratio adequate to reduce the probability of a significant pulmonary embolism?  Well, that’s the theory behind this article – which looks at d-Dimer measurements combined with age, Wells’ score, and Revised Geneva scores.
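The arithmetic underneath this idea is just Bayes’ theorem in odds form.  A minimal sketch – the 10% pre-test probability and the 0.1 negative likelihood ratio below are illustrative numbers of my own, not values from the article:

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert pre-test probability to post-test probability via odds."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative only: a low-risk patient at ~10% pre-test probability of PE,
# with a "nearly negative" d-Dimer contributing a negative LR of ~0.1.
print(post_test_probability(0.10, 0.1))  # ~0.011, i.e. ~1.1% post-test
```

The point of a graded (rather than dichotomous) d-Dimer is that the LR – and thus the post-test probability – shifts with how far the value sits from the cut-off.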

There are a lot of complex tables in this article breaking down the various potential cut-off values for d-Dimer along with different pre-test probabilities, and the concept presented is that potentially higher cut-off values of d-Dimer can be used without missing PEs larger than sub-segmental.  This is presented in context that a higher cut-off might allow reductions in imaging, which seems fair.

However, the most interesting thing in this article to me is Figure 3 – which plots d-Dimer concentration against fractional obstruction of the pulmonary vascular tree.  It is, unfortunately, pretty clear there’s not a great linear relationship between dimer and pulmonary obstruction.  Most low d-Dimers had < 5% obstruction of the vascular tree, but at least one patient with a “negative” d-Dimer had 20% obstruction.  Beyond that, patients were just as likely to have 90% obstruction with modestly elevated d-Dimers as with massively elevated d-Dimers.

“D-dimer threshold increase with pretest probability unlikely for pulmonary embolism to decrease unnecessary computerized tomographic pulmonary angiography”
www.ncbi.nlm.nih.gov/pubmed/22284935

Not-So-Routine Surgery on Dabigatran

This correspondence, published in Blood in March, was probably pretty easy to overlook.


A patient enrolled in RE-LY, the trial comparing dabigatran and warfarin for non-valvular atrial fibrillation, underwent open aortic valve replacement surgery.  As instructed, he discontinued his dabigatran two days prior to the surgery.

Had a little bit of a bleeding problem.

After 26 units of RBCs, 5 packs of platelets, 22 units of FFP, 5 x 10 units of cryoprecipitate, two doses of protamine, two doses of tranexamic acid, and five doses of Factor VIIa, the patient was finally stable enough to be evacuated to the ICU for dialysis to remove the remaining dabigatran.

What’s most fabulously ironic about this correspondence is that the authors use this horrifying case to blithely conclude that Factor VIIa and hemodialysis are viable and effective reversal strategies for dabigatran-associated bleeding.

The patient – “The postoperative course was complicated by prolonged ventilation/Enterobacter pneumonia, asymptomatic nonocclusive femoral DVT (by surveillance ultrasonography [postoperative day (POD 7)]), and acute-on-chronic renal failure. Discharge to a rehabilitation facility occurred on POD56.” – probably disagrees.

Would you be surprised if I mentioned there’s a COI issue involving the authors and the manufacturer?

“Recombinant factor VIIa (rFVIIa) and hemodialysis to manage massive dabigatran-associated postcardiac surgery bleeding”
www.ncbi.nlm.nih.gov/pubmed/22383791

Dabigatran – It’s Everywhere!

Dabigatran – you may know it as Pradaxa – is the first in a string of potential blockbuster oral anticoagulants, and it has a few problems.  Lack of effective reversal options, poor prescriber understanding of drug-drug and GFR interactions, and reduced dosing options should make physicians wary of this medication.

Well, they’re not.

This is a dataset gleaned from an ongoing physician audit covering ~4800 U.S. practitioners between 2007 and 2011, used to estimate prescription trends for the U.S.  Warfarin prescriptions were reasonably stable between 2007 and 2010, but then dropped approximately 20% over the course of 2011.  The medication taking up the slack?  Dabigatran.

If you accept the findings from RE-LY, then you’re probably OK with its use in non-valvular atrial fibrillation.  Unfortunately, 37% of the prescriptions were off-label, outside the FDA-approved indications.  Then, within the remaining 63%, there’s no breakdown between valvular and non-valvular atrial fibrillation.  So, the true percentage of off-label use is probably even higher than found in the data.

It would seem to be prudent to be cautious with a new medication that’s already being investigated by the FDA for serious bleeding complications.  Luckily for the manufacturer, that’s not happening, and prescription expenditures for dabigatran already exceed those for warfarin – over $160 million per quarter.

“National Trends in Oral Anticoagulant Use in the United States, 2007 to 2011”
http://www.ncbi.nlm.nih.gov/pubmed/22949490

More Failed Therapies for Sinusitis

For routine, office-based diagnoses of acute sinusitis, we’ve seen that antibiotics are unlikely to be beneficial.  The other theory behind treatment is attenuation of the inflammatory response, promoting sinus drainage.  Intranasal steroid sprays have inconclusive data.  This is a trial of systemic steroids, theorizing that intranasal steroids suffer from inadequate tissue penetration.

There are a lot of issues with this trial.  Whether it’s clinically significant or not, the 30mg/day dose of prednisolone is below the typically used doses of 50mg or 60mg.  There were 54 treatment locations and 68 family physicians involved in this study over a 2 1/2 year period – and they only managed to enroll 185 patients.  For a problem “frequently encountered” in primary care, it’s a little hard to have confidence there aren’t biases present in enrollment.

The authors followed many different clinical outcomes, as well as the SNOT-20 score, at several different time points, and the easiest way to sum it up is to say there are probably no clinically relevant differences between groups.  The trends nearly all favored prednisolone, but the absolute differences in outcomes provided NNT between 10 and 33.  A larger trial might have detected a statistically significant benefit to steroids – or it might not – but most enrolled patients had symptom improvement, regardless.
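To put that 185-patient enrollment in perspective, a standard two-proportion sample-size calculation shows how large a trial would need to be to reliably detect effects in the NNT 10–33 range.  A rough sketch – the 60% baseline improvement rate below is my own illustrative assumption, not a figure from the trial:

```python
import math

def n_per_arm(p1: float, p2: float) -> int:
    """Approximate per-arm sample size to distinguish event rates p1 vs p2,
    two-sided alpha 0.05 and 80% power (normal approximation)."""
    z_alpha, z_beta = 1.96, 0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Illustrative: 60% baseline improvement, with an absolute benefit of
# 10 points (NNT 10) versus 3 points (NNT 33) -- the reported NNT range.
print(n_per_arm(0.60, 0.70))  # hundreds of patients per arm
print(n_per_arm(0.60, 0.63))  # thousands of patients per arm
```

Either way, well beyond what this trial enrolled – consistent with real-but-modest effects going statistically undetected.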

“Systemic corticosteroid monotherapy for clinically diagnosed acute rhinosinusitis: a randomized controlled trial”
www.ncbi.nlm.nih.gov/pubmed/22872770

It’s Too Hot To Fight & Other Fables

There’s a mythology regarding temperature and violent crime – both increase in tandem up until a certain point, at which it becomes “too warm”.  This study, a retrospective analysis of violent crime from a six-year period in Dallas, TX, generally confirms the increase in violence as the temperature increases.

The authors additionally propose, however, a curvilinear relationship – interpreting an inflection point at 80-89 degrees a bit aggressively, considering they have only one data point above that bracket with which to define the further trend.  The absolute differences between total numbers of violent assaults in each temperature bracket are small enough that it’s a little hard to confidently say there’s a point at which it becomes too hot for violent crime.  It makes sense, of course, but that’s more editorializing.

Perhaps they could attempt to externally validate these findings in Iraq – which seems awfully hot and violent.  They also note there is a strong correlation between temperature and hours of daylight – but it seems as though it would be rather difficult to disentangle the effect of one from the other.

And, tying this entire issue into climate change is another unusual matter entirely….

“Temperature and Violent Crime in Dallas, Texas: Relationships and Implications of Climate Change”
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3415828/

The End of IABP?

Adding to the “don’t do anything, just stand there!” file, another relatively frequently used cardiovascular support tool – intra-aortic balloon counterpulsation – might be on the chopping block.

Typically used in cases of severe cardiogenic shock secondary to acute myocardial infarction, IABP is used to reduce strain on the stunned myocardium.  The first IABP-SHOCK pilot of 45 patients showed no mortality difference, but a significant improvement in BNP levels with IABP use.  This is the follow-up study, enrolling 600 patients to IABP or best available medical therapy.

Both groups were similarly ill – the IABP group had 6% more anterior STEMIs – and had nearly identical outcomes.  There were 1.5% more survivors in the IABP group, but the p value was 0.69.  Adverse events were similar – although the control group tended towards increased sepsis, which seems a little odd.  There was an expected random assortment of subgroups favoring one therapy or another, but nothing that would seem to be specifically hypothesis generating.
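For the curious, a p value that large is exactly what a simple two-proportion comparison yields at this trial size.  A rough sketch – the arm sizes and event counts below are hypothetical numbers of my own (not the published figures), chosen only to show why a ~1.5-point absolute difference across ~600 patients is nowhere near significance:

```python
import math

def two_proportion_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p value for a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))  # two-sided, via complementary error fn

# Hypothetical: ~300 patients per arm, event rates differing by ~1.5 points.
print(two_proportion_p_value(120, 300, 125, 300))  # well above 0.05
```

Differences of this size simply cannot be distinguished from noise without a far larger trial.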

In the end, the authors rather grimly state that, despite some surrogate markers appearing to be improved in the IABP group, there is no evidence to support routine use of IABP in cardiogenic shock secondary to acute myocardial infarction.

“Intraaortic Balloon Support for Myocardial Infarction with Cardiogenic Shock”
http://www.nejm.org/doi/full/10.1056/NEJMoa1208410