iPhone Medical Apps To The Rescue

In this study, the author and creator of “PICU Calculator” for iPhone details the superiority of a medical student with a smartphone over an attending using the pharmacy reference book.  A few entertaining tidbits from their main results:
 – Medical students don't know how a book functions – they failed to correctly complete any pediatric dosing task using the British National Formulary for Children.
 – Residents and attendings managed to make the book work for them about half the time.
 – Overall across all levels of training, 35 for 35 in correct dosage and volume using the iPhone app – with a mean time savings of over 5 minutes.
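
The dosing task itself is simple arithmetic – which is part of the point.  Here's a minimal sketch of the kind of weight-based dose-and-volume calculation such an app automates; the drug parameters below are hypothetical placeholders, not values from the study or the BNFC:

```python
# Minimal sketch of weight-based dose-and-volume arithmetic. The per-kg dose
# and concentration are hypothetical, not taken from the study or the BNFC.

def infusion_dose(weight_kg: float, dose_per_kg: float, concentration_mg_per_ml: float):
    """Return (dose in mg, volume to draw up in mL) for a weight-based dose."""
    dose_mg = weight_kg * dose_per_kg
    volume_ml = dose_mg / concentration_mg_per_ml
    return dose_mg, volume_ml

# Example: a 14 kg child, a hypothetical drug dosed at 0.5 mg/kg, supplied at 10 mg/mL
dose, volume = infusion_dose(weight_kg=14, dose_per_kg=0.5, concentration_mg_per_ml=10)
print(f"Dose: {dose} mg, draw up {volume} mL")   # Dose: 7.0 mg, draw up 0.7 mL
```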

So, when the author of an iPhone app chooses a clinical task his app is designed to replace, it works great!  But the larger point – as we already knew – is that there is a role for well-designed point-of-care electronic tools, so we shouldn't give up on our CPOE and EHR kludges just yet.

“Students prescribing emergency drug infusions utilising smartphones outperform consultants using BNFCs.”
www.ncbi.nlm.nih.gov/pubmed/21787737

Who Are The Readmitted?

Now, where I trained, we were the only useful facility for hundreds of miles – so we actually had a lot of continuity of care in the Emergency Department.  And nothing beat the continuity we saw when a patient discharged in the morning was back in our Emergency Department by evening – and the inevitable question of "how did they screw this up?"

This is a retrospective look at readmissions from 11 teaching and community hospitals, trying to classify readmissions as avoidable vs. unavoidable, characterize the causes of readmission, and see whether any baseline characteristics might predict readmission.  They found avoidable readmissions were in the minority, and there was no useful predictive clinical information in the baseline differences between the readmitted group and the overall cohort – comorbidities, length of stay, new medications, etc.  When patients were avoidably readmitted, however, several recurring factors were noted:
 – Management error (48% of the time)
 – Surgical complications (38.5%)
 – Medication-related event (32.7%)
 – Nosocomial infection (18.3%)
 – System error (15.4%)
 – Diagnostic error (10.6%).

Considering CMS is looking closely at decreasing payments to physicians and hospitals for readmissions, this study provides a small amount of systematic insight into some of the things we’ve all observed anecdotally.

“Incidence of potentially avoidable urgent readmissions and their relation to all-cause urgent readmissions.”
www.cmaj.ca/content/early/2011/08/22/cmaj.110400

Good Thought, But It’s Not Pertussis

A Swiss study in which only 2.5% of 1,049 pediatric ambulatory and hospitalized patients presenting with a cough illness, and tested for pertussis, were culture-positive for B. pertussis or B. parapertussis.  Probably a relatively accurate picture of the general prevalence of pertussis in a non-outbreak situation.  They additionally report that viral co-infection is rare enough to be coincidental – 0.6% – although the authors do note other studies have reported higher incidence, particularly in RSV+ hospitalized children <6 months of age.

So, these data are out the window in an outbreak situation, but the overall clinical take-home is that, yet again, our index of suspicion may be too high for an infrequently diagnosed condition – and we should moderate testing in lower-acuity cases.

“Bordetella pertussis and Concomitant Viral Respiratory Tract Infections are Rare in Children With Cough Illness.”
www.ncbi.nlm.nih.gov/pubmed/21407144

Malpractice Risk in Emergency Medicine

I was actually surprised by these statistics – I expected Emergency Medicine to be higher.  After all, we’re meeting people with potentially unrealistic expectations, suffering long wait times, without continuity of care, and potential bad outcomes lurking everywhere.

But, really, our rates of claims filed and claims paid are pretty much average across specialties.  Neurosurgery and Thoracic Surgery are the nightmare specialties, where nearly a fifth of practicing physicians have a claim filed against them each year.  Another interesting statistic: Gynecology, only a little above average in claims filed, has the highest percentage of claims resulting in payout.

Neurosurgery, Neurology, and Internal Medicine lead the way in median payout, but Pediatrics, Pathology, and Ob/Gyn lead the way in mean payout – apparently skewed by the occasional massive award.

Given the legislation pending in many states these days giving additional protections to Emergency Physicians and physicians on-call to Emergency Departments, it’s really not a bad time to be in EM, from a liability standpoint.

"Malpractice Risk According to Physician Specialty"
www.ncbi.nlm.nih.gov/pubmed/21848463

Forced Diuresis Prevents CIN

I admit, I was shocked when I got to the end of the paper and found the authors had no disclosures – it seems nearly every study concerning a commercial product has someone on the payroll.  Heck, the study is even registered with clinicaltrials.gov, and they didn’t change their protocol at all.

Anyway, this paper concerns the RenalGuard system, which is basically a closed-loop device that matches the urine output from a furosemide-driven forced diuresis with real-time normal saline replacement.  They compare this to "usual therapy" – which, for them, is sodium bicarbonate and N-acetylcysteine (NAC) – for the prevention of contrast-induced nephropathy from an iodixanol contrast load.  Basically, they ran this system in patients at high risk for CIN for roughly 5 hours around the time of their contrast procedure, targeting a urine flow rate >300 mL/hr.  When successful, those patients had significantly less CIN than the "usual therapy" group (10% vs. 20%).
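
To make the concept concrete, here's a toy sketch of the matched-replacement idea – measure urine output over an interval, infuse an equal volume of saline, and check the flow-rate target from the study – emphatically not the device's actual control algorithm or safety logic:

```python
# Toy sketch of matched-replacement forced diuresis: not RenalGuard's actual
# control algorithm, just the arithmetic of the concept described above.

def replacement_volume_ml(urine_output_ml: float) -> float:
    """Volume of normal saline to infuse to match this interval's urine output."""
    return urine_output_ml

def hit_flow_target(urine_output_ml: float, interval_hr: float,
                    target_ml_per_hr: float = 300.0) -> bool:
    """Did urine flow exceed the >300 mL/hr target used in the study?"""
    return urine_output_ml / interval_hr > target_ml_per_hr

# Example: 180 mL of urine collected over a 30-minute interval
print(replacement_volume_ml(180))   # infuse 180 mL of saline back
print(hit_flow_target(180, 0.5))    # True – 360 mL/hr exceeds the target
```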

So, it seems like it works.  There was more pulmonary edema (3 vs. 1) in the RenalGuard group, and more electrolyte abnormalities to replace, but this is a therapy that might yet have some utility.  It may even be practical in an ED setting, to a limited extent.

“Renal Insufficiency After Contrast Media Administration Trial II (REMEDIAL II)”
www.ncbi.nlm.nih.gov/pubmed/21518686

The PERC Rule Mini-Review

Journal club this month at my institution involved the literature behind the derivation and validation of the PERC (Pulmonary Embolism Rule-Out Criteria) Rule.  So, as faculty, to be dutifully prepared, I read the articles and a smorgasbord of supporting literature – only to realize I’m working the conference coverage shift.  Rather than waste my notes, I’ve turned them into an EMLit mega-post.

Derivation
The derivation of the PERC rule in 2004 comes from 3,148 patients for whom “an ER physician thought they might have Pulmonary Embolism”.  Diagnosis was confirmed by CTA (196 patients), CTA + CTV (1116), V/Q (1055) + duplex U/S (372), angiography (11), autopsy (21), and 90-day follow-up (650).  348 (11% prevalence) were positive for PE.  They then did a regression analysis on those patients and came up with the PERC rule, the eight-item dichotomous test for which you need to answer yes to every single question to pass.
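
For reference, here's a minimal sketch of that eight-item check – "PERC negative" only when every criterion is satisfied – with thresholds paraphrased from the published rule (see the MDCalc link at the end of this post):

```python
# Sketch of the eight-item PERC check; thresholds paraphrased from the
# published rule, not an authoritative implementation.

def perc_negative(age, heart_rate, sao2_room_air, hemoptysis, estrogen_use,
                  prior_dvt_or_pe, unilateral_leg_swelling, recent_surgery_or_trauma):
    criteria = [
        age < 50,
        heart_rate < 100,
        sao2_room_air >= 95,           # pulse oximetry on room air, percent
        not hemoptysis,
        not estrogen_use,
        not prior_dvt_or_pe,
        not unilateral_leg_swelling,
        not recent_surgery_or_trauma,  # within the preceding four weeks
    ]
    return all(criteria)  # True only if every question is answered "yes"

# Example: 42 years old, HR 88, SaO2 98%, no other risk factors -> PERC negative
print(perc_negative(42, 88, 98, False, False, False, False, False))  # True
```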

The test case for the derivation came from 1,427 "low-risk" patients who were PE suspects and, as such, had only a d-Dimer ordered to rule out PE – with CTA performed when the d-Dimer was positive.  114 (8% prevalence) had PE.  There was also an additional "very low-risk" test case: 382 patients from another dyspnea study, enrolled when "an ED physician thought PE was not the most likely diagnosis."  9 (2.3%) of the very low-risk cohort had PE.

Performance on their low-risk test set was a sensitivity of 96% (CI 90-99%) with a specificity of 27%.  On their very low-risk test set, sensitivity was 100% (59-100%) with a specificity of 15%.

Validation
Multicenter enrollment of 12,213 patients with "possible PE"; 8,183 were fully enrolled.  51% underwent CTA, 6% underwent V/Q, and everyone received 45-day follow-up for a diagnosis of venous thromboembolism.  Overall, 6.9% of the population was diagnosed with pulmonary embolism.

Of these, 1,952 were PERC negative – giving rise to a 95.7% sensitivity (93.6-97.2%).  However, the authors additionally identify a “gestalt low-risk” group of 1,666 that had only 3.0% prevalence of PE, apply the PERC rule to that, and come up with sensitivity of 97.4% (95.8 – 98.5%).

The authors then conclude the PERC rule is valid and obviates further testing when applied to a gestalt low-risk cohort in which the prevalence is less than 6%.

Other PERC Studies
Retrospective application of PERC to another prospective PE database in Denver.  Prevalence of PE was 12% in 134 patients.  Only 19 patients were PERC negative, none of whom had PE.  Sensitivity: 100% (79-100%).

Retrospective application of PERC to patients receiving CT scans in Schenectady.  Prevalence of PE was 8.45% in 213 patients.  48 were PERC negative, none of whom had PE.  Sensitivity: 100% (79-100%).

Effectiveness study of PERC in an academic ED (Carolinas).  183 suspected PE patients, PERC was applied to 114, 65 of whom were PERC negative.  16 of the PERC negative underwent CTA, all negative.  14 day follow-up of the remaining 49 also indicated no further PE diagnosis.  No sensitivity calculation.

Retrospective application of PERC to a prospective PE cohort in Switzerland.  Prevalence of PE was 21.3% in 1,675 patients.  Of the 221 patients who were PERC negative, 5.4% had PE (3.1 – 9.3%), for a sensitivity of 96.6% (94.2 – 98.1%).  The subset of PERC-negative patients who were also low-risk by Geneva score actually had a higher incidence of PE, at 6.4%.

Summary
So, PERC can only be applied to a population you think is low-risk for PE – for which you can use clinical gestalt or Wells' – because Wells' low-risk prevalence looks like 1.3% (0.5-2.7%) to 2% (0-9%).  But you can't use Geneva, because Geneva low-risk prevalence is closer to 8% – and that's essentially what the Swiss study shows.

But in this already very low-risk population, the question is: what is the role of PERC?  Clinical gestalt in their original study actually worked great – even though clinicians were only asked to risk-stratify to <15%, their gestalt low-risk group had a 3.0% prevalence of PE, which means we wildly overestimate the true risk of pulmonary embolism.  If you take a gestalt or Wells' low-risk population, apply PERC, and it's negative, your population that nearly universally didn't have a PE still doesn't have a PE – it doesn't buy you much in absolute risk reduction.  If they're Wells' low-risk and PERC negative, you probably shouldn't have considered PE as a diagnosis at all, other than for academic and teaching reasons.

Then, take the flip side – what happens if your patient is PERC positive?  You have a low-risk patient whose prevalence of PE is probably somewhere between 1 and 5%, and now you've got a test with a positive LR of 1.24 – it barely changes anything from a statistical standpoint.  Then, do you do a d-Dimer, which has a positive LR between 1.6 and 2.77?  Now you've done a ton of work, painted yourself into a corner, and you have to get a CTA on a patient whose chance of having a PE is still probably less than 10%.
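
To put rough numbers on that, here's a quick sketch chaining those likelihood ratios through the odds form of Bayes' theorem, using the 3% gestalt low-risk prevalence from the validation study and the upper-end d-Dimer LR quoted above:

```python
# Back-of-the-envelope post-test probabilities using the odds-likelihood form
# of Bayes' theorem and the likelihood ratios quoted in the text.
# (Chaining LRs like this treats the tests as independent – a simplification.)

def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

after_perc_positive = post_test_probability(0.03, 1.24)                   # positive PERC
after_dimer_positive = post_test_probability(after_perc_positive, 2.77)  # then positive d-Dimer

print(f"After positive PERC:    {after_perc_positive:.1%}")   # ~3.7%
print(f"After positive d-Dimer: {after_dimer_positive:.1%}")  # ~9.6% – still under 10%
```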

That's where your final problem shows up.  CTA is overrated as a diagnostic test for pulmonary embolism.  In PIOPED II, published in NEJM in 2006, CTA produced 16 false positives against 22 true positives in the low-risk cohort – 42% of positive scans were false positives – and that's measured against a reference standard the authors estimated already carried a 9% false-positive and 2% false-negative rate.  CTA is probably better now than it once was, but it still has significant limitations in a low-risk population – and I would argue the false-positive rate is even higher now, given the increased resolution and ability to discern more subtle contrast filling defects.

So, this is what I get out of PERC.  Either you apply it to someone you didn't think had PE, it's negative, and you wonder why you bothered to apply it in the first place – or you follow it down the decision tree and end up at a CTA where you might as well flip a coin to decide whether a positive result is real.

And, I don’t even want to get into the clinical relevance of diagnosis and treatment of those tiny subsegmental PEs we’re “catching” on CTA these days.

“Clinical criteria to prevent unnecessary diagnostic testing in emergency department patients with suspected pulmonary embolism”
www.ncbi.nlm.nih.gov/pubmed/15304025

“Prospective multicenter evaluation of the pulmonary embolism rule-out criteria”
www.ncbi.nlm.nih.gov/pubmed/18318689

“Assessment of the pulmonary embolism rule-out criteria rule for evaluation of suspected pulmonary embolism in the emergency department”
www.ncbi.nlm.nih.gov/pubmed/18272098

“The Pulmonary Embolism Rule-Out Criteria rule in a community hospital ED: a retrospective study of its potential utility”
www.ncbi.nlm.nih.gov/pubmed/20708891

“Prospective Evaluation of Real-time Use of the Pulmonary Embolism Rule-out Criteria in an Academic Emergency Department”
www.ncbi.nlm.nih.gov/pubmed/20836787

“The pulmonary embolism rule-out criteria (PERC) rule does not safely exclude pulmonary embolism”
www.ncbi.nlm.nih.gov/pubmed/21091866

“Multidetector Computed Tomography for Acute Pulmonary Embolism”
www.ncbi.nlm.nih.gov/pubmed/16738268

“D-Dimer for the Exclusion of Acute Venous Thrombosis and Pulmonary Embolism”
www.ncbi.nlm.nih.gov/pubmed/15096330

http://www.mdcalc.com/perc-rule-for-pulmonary-embolism

ACI-TIPI For Predicting Cardiac Outcomes

In an earlier post, I noted an article that had done a systematic review finding 115 publications attempting to create or validate clinical prediction rules for chest pain.  Well, here’s number 116.

The ACI-TIPI (Acute Cardiac Ischemia Time-Insensitive Predictive Instrument) is computerized analysis software that generates a score for the likelihood of cardiac ischemia based on age, gender, chest pain, and EKG variables.  It's actually a product marketed and sold by Philips.  These authors evaluated how well the instrument predicted 30-day events, with an interest in identifying a group that could be safely discharged from the Emergency Department.

In an institution with 55,000 visits a year, the authors recruited only 144 chest pain patients – which is the first red flag.  It doesn't matter how good your prediction rule is if you only recruit 144 patients – the confidence intervals will be terrible, and their sensitivities for identifying 30-day cardiac outcomes were 82-100% at best.  And, yes, they did report that an ACI-TIPI score <20 carried a purportedly useful negative predictive value.
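
As a rough illustration of the sample-size problem – the event counts below are hypothetical, not taken from the paper – even perfect observed sensitivity can't produce a reassuring lower confidence bound when there are only a handful of outcome events to catch:

```python
# Exact (Clopper-Pearson) 95% lower confidence bound on sensitivity when every
# event is detected: with x = n, the bound reduces to (alpha/2) ** (1/n).
# Event counts are hypothetical, purely to show how small samples behave.

def lower_bound_perfect_sensitivity(n_events: int, alpha: float = 0.05) -> float:
    return (alpha / 2) ** (1 / n_events)

for n_events in (5, 11, 25, 100):
    bound = lower_bound_perfect_sensitivity(n_events)
    print(f"{n_events:>3} events, all caught -> 95% lower bound {bound:.1%}")
#   5 events -> ~47.8%
#  11 events -> ~71.5%
#  25 events -> ~86.3%
# 100 events -> ~96.4%
```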

So, I suppose this paper doesn’t really tell us much – and even if the data were better, I’m not sure the sensitivity/specificity of this ACI-TIPI calculation would meet a useful clinical threshold to reduce low-risk hospitalizations any better than clinical gestalt.  I’ll be back with you when I find risk-stratification attempt 117….

“Prognostic utility of the acute cardiac ischemia time-insensitive predictive instrument (ACI-TIPI)”
www.intjem.com/content/4/1/49

Sometimes, The Pregnancy Test Lies

A couple of years ago, my hospital pulled the POC urine pregnancy tests from the ED because of false negatives – leading to incredulous discussions of how a nursing assistant could possibly screw up something as simple as a dichotomous colorimetric test.

Well, at Washington University, when they had multiple issues with their POC pregnancy test, they investigated in more depth, and this nice little article is an overview of the limitations of the test.  There are two ways the POC test fails:
 – Not pregnant enough.
 – Too pregnant.

We all know sensitivity in early pregnancy is really only 97% or so at one week, and no one will fault the test for that.  However, in their case series of five patients, all with serum hCG >130,000, the antigen excess is hypothesized to have saturated the assay to the point of a false-negative test.

In any event, interesting article about something I hadn’t put much thought into.

“‘Hook-Like Effect’ Causes False-Negative Point-Of-Care Urine Pregnancy Testing in Emergency Patients”
http://www.ncbi.nlm.nih.gov/pubmed/21835572

CT Coronary Angiography Proves People With CAD Die Sooner

This is a neat study that followed 23,854 patients from a multicenter CTCA registry – the CONFIRM registry – over three years to evaluate their long-term prognostic risk.  And – amazingly enough – the patients who had no coronary artery disease identified on their CTCA had an annualized all-cause mortality of 0.28%.  Which seems pretty impressive, and it's better than the patients who had non-obstructive or various types of obstructive CAD on their CTCA.

But then, the hazard ratio for patients with 3-vessel or left main disease on CTCA was still only about six times that of the no-CAD cohort – a lot higher in relative terms, but still not very high in absolute terms (on the order of 1.7% per year) – and these patients carried plenty of other comorbidities contributing to all-cause mortality from non-cardiac causes.  So, yes, not having CAD – like being a generally healthy person – helps you live longer.

The question still remains where CTCA fits into an Emergency Department evaluation for chest pain.  We are seeing more and more research showing that PCI for asymptomatic lesions offers no survival benefit over medical management – so identifying these lesions and admitting these patients to cardiology for intervention isn't going to be our future.  And considering over 55% of their cohort had either non-obstructive or obstructive disease, you're going to be on the hook for making outpatient CAD risk-modification decisions after cardiology declines them.

Whether CTCA is used should be a standardized, institution-wide decision, because I don’t think anyone wants to take the weight of sorting through all this evidence and risk/benefit ratios as a lone wolf.

"Age- and Sex-Related Differences in All-Cause Mortality Risk Based on Coronary Computed Tomography Angiography Findings"
www.ncbi.nlm.nih.gov/pubmed/21835321

CT Use Is Increasing(ly Justified?)

A retrospective cohort analysis based on the NHAMCS dataset, with all its inherent limitations.

We have a 330% increase in the use of CT in the Emergency Department – up from 3.2% of visits in 1996 to 13.9% in 2007.  This increase is fairly consistent across all age groups (including a rate of nearly 5% now in patients under 18 years of age).  The interesting part of the paper – something we didn't already know – is the data on the adjusted rate of hospitalization or transfer after CT.  In 1996, 26% of patients receiving a CT were admitted to the hospital, while now only 12% of patients receiving a CT are admitted.

The problem is, I've seen news organizations running with the conclusion that, because CT rates are higher and the relative likelihood of hospitalization after CT is lower, CT must be preventing hospitalizations.  You can't draw any such conclusion from these data – particularly considering hospitalizations have climbed over that same period.

We just aren't seeing any data linking the increase in CT use to improved outcomes.  CT certainly has its place as the standard of care in many instances, but there's no silver lining to this 330% increase.

"National Trends in Use of Computed Tomography in the Emergency Department."
www.ncbi.nlm.nih.gov/pubmed/21115875