Do/Don’t Scan the Trauma Patient

In a study attempting to build consensus, the authors instead discovered philosophical differences between the trauma team and the emergency physicians.

This is a prospective observational study in which 701 blunt trauma activations at LAC-USC were enrolled, with the emergency physician and the trauma team each giving an opinion on which CT studies were necessary.  The authors then reviewed which scans were obtained, sorted out the scans that were undesired by one or both physicians, and determined whether any injuries would have been missed.

Bafflingly, 7% of the 2,804 scans obtained during the study period were deemed unnecessary by both the emergency physician and the trauma attending – yet were still performed.  Another 794 scans were desired by the trauma team but not by the emergency physician.  Their question – would anything of significance have been missed if the scans had been more selectively ordered?

The answer is – yes and no.  The trauma surgeon authors say yes, justifying it by noting that many of the abnormalities that would have gone undetected required closer monitoring – just because none of the missed injuries deteriorated during the study period does not mean they were not significant.  The emergency physician authors point to a 56% reduction in pan-scanning, the attendant reductions in radiation exposure and cost, and hang their hats on the fact that none of the hypothetically missed injuries changed management.

So, who is right?  Both, and neither, of course.  Emergency physicians and trauma teams should work on developing evidence-based clinical decision rules to support selective scanning in blunt trauma – and then try this study again to see if they can generate results they can agree on.

Definitely a fun read.

As far as medical literature goes, of course.

“Selective Use of Computed Tomography Compared With Routine Whole Body Imaging in Patients With Blunt Trauma.”
www.ncbi.nlm.nih.gov/pubmed/21890237

When Is Blunt Chest Trauma Low-Risk?

According to this study, always – but rarely.

This is a prospective 3-center trauma study attempting to discern clinical variables that predicted the absence of serious traumatic chest injury in the setting of blunt trauma.  2,628 subjects were enrolled, 271 of whom were diagnosed with a serious injury – pneumothorax, hemothorax, great vessel injury, multiple rib fractures, sternal fracture, pulmonary contusion, or diaphragmatic rupture.  The authors performed a recursive partitioning analysis and identified a combination of seven clinical findings with a 99.3% (97.4 – 99.8%) sensitivity for serious traumatic injury.

But, I might be missing the point of this instrument a little bit.  Only 10% of their cohort had a serious traumatic injury – yet out of the remaining 90% without serious traumatic injury, their rule could only carve out 14% as low risk.  In these low-risk patients, the authors then propose, chest imaging can be omitted entirely.  While I am all for reducing unnecessary testing, this seems like an awfully low-yield decision rule.  Yes, this study identifies young patients who are perfectly fine after a low-risk blunt trauma and do not need even an x-ray – but I’d really rather see more work preventing some of the 584 chest CTs performed in this cohort.  Additionally, their criterion standard for negative imaging is inadequate – most received CXR alone, and there’s no follow-up protocol to test for possible missed injuries, whether clinically significant or not.
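To put the yield in perspective, here is a back-of-the-envelope sketch using only the figures quoted above (2,628 enrolled, 271 serious injuries, 99.3% sensitivity, roughly 14% of the uninjured flagged as low risk); these are approximations, not the study’s exact 2x2 counts.

```python
# Rough yield of the decision instrument, reconstructed from the figures quoted above.
# These are approximations, not the study's actual 2x2 table.
enrolled = 2628
injured = 271
uninjured = enrolled - injured                          # 2,357 without serious injury

sensitivity = 0.993                                     # reported sensitivity
missed = round(injured * (1 - sensitivity))             # ~2 injuries missed

low_risk_fraction = 0.14                                # ~14% of the uninjured flagged low risk
imaging_avoided = round(uninjured * low_risk_fraction)  # ~330 patients

print(f"Injuries missed at 99.3% sensitivity: ~{missed}")
print(f"Patients in whom imaging could be skipped: ~{imaging_avoided} "
      f"({imaging_avoided / enrolled:.0%} of the whole cohort)")
```

Even taking the rule at face value, it spares imaging in only about an eighth of the cohort – which is the low-yield problem described above.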

Considering the criteria they identified, it seems an equal or greater reduction in imaging could be achieved if clinicians were simply a little more thoughtful about knee-jerk imaging in trauma.

“Derivation of a Decision Instrument for Selective Chest Radiography in Blunt Trauma.”
www.ncbi.nlm.nih.gov/pubmed/21045745

The PERC Rule Mini-Review

Journal club this month at my institution involved the literature behind the derivation and validation of the PERC (Pulmonary Embolism Rule-Out Criteria) Rule.  So, as faculty, to be dutifully prepared, I read the articles and a smorgasbord of supporting literature – only to realize I’m working the conference coverage shift.  Rather than waste my notes, I’ve turned them into an EMLit mega-post.

Derivation
The derivation of the PERC rule in 2004 comes from 3,148 patients for whom “an ER physician thought they might have Pulmonary Embolism”.  Diagnosis was established by CTA (196 patients), CTA + CTV (1116), V/Q (1055) + duplex U/S (372), angiography (11), autopsy (21), and 90-day follow-up (650).  348 patients (11% prevalence) were positive for PE.  The authors then performed a regression analysis on those patients and came up with the PERC rule, an eight-item dichotomous test for which you need to answer yes to every single question to pass.

The test case for the derivation came from 1,427 “low-risk” patients who were PE suspects and, as such, had only a d-Dimer ordered to rule out PE – with a CTA performed when the d-Dimer was positive.  114 (8% prevalence) had PE.  There was also an additional “very low-risk” test case: 382 patients from another dyspnea study who were enrolled when “an ED physician thought PE was not the most likely diagnosis.”  9 (2.3%) of the very low-risk cohort had PE.

Performance on their low-risk test set was a sensitivity of 96% (CI 90-99%) with a specificity of 27%.  On their very low-risk test set, sensitivity was 100% (59-100%) with a specificity of 15%.
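For readers who want to check these intervals, here is a minimal sketch of how a sensitivity and its 95% confidence interval (Wilson score method) are computed from a 2x2 table; the counts in the example are hypothetical, not the actual table from the derivation paper.

```python
from math import sqrt

def sensitivity_with_wilson_ci(true_pos: int, false_neg: int, z: float = 1.96):
    """Sensitivity = TP / (TP + FN), with a Wilson score 95% confidence interval."""
    n = true_pos + false_neg
    p = true_pos / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    halfwidth = (z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / (1 + z**2 / n)
    return p, max(0.0, center - halfwidth), min(1.0, center + halfwidth)

# Hypothetical example: 109 of 114 PEs picked up in a test set
sens, lo, hi = sensitivity_with_wilson_ci(true_pos=109, false_neg=5)
print(f"Sensitivity {sens:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

The same arithmetic explains the wide intervals in the smaller retrospective PERC studies summarized below.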

Validation
Multicenter enrollment of 12,213 patients with “possible PE”.  8,183 were fully enrolled.  51% underwent CTA, 6% underwent V/Q, and everyone received 45-day follow-up for a diagnosis of venous thromboembolism.  Overall, 6.9% of the population was diagnosed with pulmonary embolism.

Of these, 1,952 were PERC negative – giving rise to a 95.7% sensitivity (93.6-97.2%).  However, the authors additionally identify a “gestalt low-risk” group of 1,666 with only a 3.0% prevalence of PE, apply the PERC rule to that group, and come up with a sensitivity of 97.4% (95.8 – 98.5%).

The authors then conclude the PERC rule is valid and obviates further testing when applied to a gestalt low-risk cohort in which the prevalence is less than 6%.

Other PERC Studies
Retrospective application of PERC to another prospective PE database in Denver.  Prevalence of PE was 12% among 134 patients.  Only 19 patients were PERC negative, none of whom had PE.  Sensitivity was 100% (79-100%).

Retrospective application of PERC to patients receiving CT scans in Schenectady.  Prevalence of PE was 8.45% among 213 patients.  48 were PERC negative, none of whom had PE.  Sensitivity was 100% (79-100%).

Effectiveness study of PERC in an academic ED (Carolinas).  Of 183 suspected PE patients, PERC was applied to 114, 65 of whom were PERC negative.  16 of the PERC-negative patients underwent CTA, all negative; 14-day follow-up of the remaining 49 also revealed no further PE diagnoses.  No sensitivity calculation.

Retrospective application of PERC to a prospective PE cohort in Switzerland.  Prevalence of PE was 21.3% in 1,675 patients.  Of the 221 patients who were PERC negative, 5.4% had PE (3.1 – 9.3%), for a sensitivity of 96.6% (94.2 – 98.1%).  The subset of PERC-negative patients who were also low-risk by Geneva score actually had a higher incidence of PE, at 6.4%.

Summary
So, PERC can only be applied to a population you think is low-risk for PE – for which you can use clinical gestalt or Wells’ – because it looks like Wells’ low-risk prevalence runs from 1.3% (0.5-2.7%) to 2% (0-9%).  But you can’t use Geneva, because the Geneva low-risk prevalence is closer to 8% – and that’s essentially what the Swiss study shows.

But in this already very low-risk population, the question is, what is the role of PERC?  Clinical gestalt in their original study actually worked great.  Even though clinicians were only asked to risk stratify to <15%, they risk stratified to 3.0% prevalence of PE.  Which, of course, means our estimation of the true risk of pulmonary embolism is absolutely bonkers.  If you take a gestalt or Wells’ low-risk population, apply PERC, and it’s negative – your population that nearly universally didn’t have a PE still doesn’t have a PE, and it doesn’t get you much in absolute risk reduction.  You probably shouldn’t have even considered PE as a diagnosis other than for academic and teaching reasons if they’re Wells’ low-risk and PERC negative.

Then, take the flip side – what happens if your patient is PERC positive?  You have a low-risk patient whose pre-test probability of PE is probably somewhere between 1 and 5%, and now you’ve got a test with a positive LR of 1.24 – it barely changes anything from a statistical standpoint.  Then, do you do a d-Dimer, which has a positive LR between 1.6 and 2.77?  Now you’ve done a ton of work, painted yourself into a corner, and you have to get a CTA on a patient whose chance of having a PE is still probably less than 10%.
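To make that arithmetic concrete, here is a minimal sketch of the odds-likelihood calculation behind the claim; the 2% pre-test probability is an assumption chosen from the 1-5% range above, and the likelihood ratios are the ones quoted in the text.

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to a post-test probability via odds x LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

pretest = 0.02                                                # assumed 2% pre-test probability
after_perc_positive = posttest_probability(pretest, 1.24)     # +LR of a positive PERC
after_ddimer_positive = posttest_probability(after_perc_positive, 2.77)  # upper-end d-Dimer +LR

print(f"After a positive PERC:    {after_perc_positive:.1%}")   # ~2.5%
print(f"After a positive d-Dimer: {after_ddimer_positive:.1%}") # ~6.6%
```

Even stacking both “positive” results, the low-risk patient stays in the single digits – exactly the corner described above.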

That’s where your final problem shows up.  CTA is overrated as a diagnostic test for pulmonary embolism.  In PIOPED II, published in 2006 in NEJM, CTA had 16 false positives and 22 true positives in the low-risk cohort – a 42% false-positive rate among positive scans – and this is against a reference standard which they estimated already had a 9% false positive and 2% false negative rate.  CTA is probably better now than it once was, but it still has significant limitations in a low-risk population – and I would argue the false positive rate is even higher, given the increased resolution and ability to discern more subtle contrast filling defects.
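That 42% figure is simply the proportion of positive CTAs that turned out to be false, using the counts quoted above – a quick sanity check:

```python
# Proportion of positive CTAs that were false positives in the low-risk PIOPED II subgroup,
# using the counts quoted above (16 false positives, 22 true positives).
false_pos, true_pos = 16, 22
false_discovery = false_pos / (false_pos + true_pos)
print(f"{false_discovery:.0%} of positive CTAs were false positives")  # ~42%
```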

So, this is what I get out of PERC.  Either you apply it to someone you didn’t think had PE and it’s negative – and you wonder why you bothered to apply it in the first place – or you follow it down the decision tree and end up at a CTA where you can flip a coin to decide whether the positive result is real or not.

And, I don’t even want to get into the clinical relevance of diagnosis and treatment of those tiny subsegmental PEs we’re “catching” on CTA these days.

“Clinical criteria to prevent unnecessary diagnostic testing in emergency department patients with suspected pulmonary embolism”
www.ncbi.nlm.nih.gov/pubmed/15304025

“Prospective multicenter evaluation of the pulmonary embolism rule-out criteria”
www.ncbi.nlm.nih.gov/pubmed/18318689

“Assessment of the pulmonary embolism rule-out criteria rule for evaluation of suspected pulmonary embolism in the emergency department”
www.ncbi.nlm.nih.gov/pubmed/18272098

“The Pulmonary Embolism Rule-Out Criteria rule in a community hospital ED: a retrospective study of its potential utility”
www.ncbi.nlm.nih.gov/pubmed/20708891

“Prospective Evaluation of Real-time Use of the Pulmonary Embolism Rule-out Criteria in an Academic Emergency Department”
www.ncbi.nlm.nih.gov/pubmed/20836787

“The pulmonary embolism rule-out criteria (PERC) rule does not safely exclude pulmonary embolism”
www.ncbi.nlm.nih.gov/pubmed/21091866

“Multidetector Computed Tomography for Acute Pulmonary Embolism”
www.ncbi.nlm.nih.gov/pubmed/16738268

“D-Dimer for the Exclusion of Acute Venous Thrombosis and Pulmonary Embolism”
www.ncbi.nlm.nih.gov/pubmed/15096330

http://www.mdcalc.com/perc-rule-for-pulmonary-embolism

ACI-TIPI For Predicting Cardiac Outcomes

In an earlier post, I noted an article that had done a systematic review finding 115 publications attempting to create or validate clinical prediction rules for chest pain.  Well, here’s number 116.

The ACI-TIPI (Acute Cardiac Ischemia Time-Insensitive Predictive Instrument) is computerized analysis software that generates a score for the likelihood of cardiac ischemia based on age, gender, chest pain, and EKG variables.  It’s actually a product marketed and sold by Philips.  These authors evaluated how well this instrument predicted 30-day events, with an interest in identifying a group that could be safely discharged from the Emergency Department.

In an institution with 55,000 visits a year, the authors recruited only 144 chest pain patients – which is the first red flag.  It doesn’t matter how good your prediction rule is if you only recruit 144 patients – your confidence intervals will be terrible, and the reported sensitivities for identifying 30-day cardiac outcomes were 82-100% at best.  And, yes, they did report a purportedly useful negative predictive value for an ACI-TIPI score <20.
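As a hypothetical illustration of why 144 patients is not enough: even if a rule catches every one of, say, 20 outcome events, the exact binomial 95% lower bound on its sensitivity is still only about 83%.  The event counts below are invented for illustration, not taken from the study.

```python
# Exact (Clopper-Pearson) lower bound for an observed sensitivity of 100%:
# if all n events are detected, the two-sided 95% lower bound is 0.025 ** (1 / n).
# The event counts are hypothetical, chosen only to show how wide small-sample intervals are.
for n_events in (10, 20, 40):
    lower_bound = 0.025 ** (1 / n_events)
    print(f"{n_events} events, 100% observed sensitivity -> 95% lower bound {lower_bound:.1%}")
```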

So, I suppose this paper doesn’t really tell us much – and even if the data were better, I’m not sure the sensitivity/specificity of this ACI-TIPI calculation would meet a useful clinical threshold to reduce low-risk hospitalizations any better than clinical gestalt.  I’ll be back with you when I find risk-stratification attempt 117….

“Prognostic utility of the acute cardiac ischemia time-insensitive predictive instrument (ACI-TIPI)”
www.intjem.com/content/4/1/49

We Still Can’t Predict Cardiac Outcomes in Syncope

The authors of this article claim that the San Francisco Syncope Rule – which we’ve already put out to pasture – has simple EKG criteria that “can help predict which patients are at risk of cardiac outcomes”.

And, they’re only possibly partly right.  Out of the 644 patients they followed for syncope, there were 42 cardiac events within the 7-day follow-up period.  Of those 42, 36 met the criteria for “abnormal EKG”.  If you had a completely normal EKG, it was 6 out of 428 who had a cardiac event, which gave them the 99% NPV upon which they base the quoted statement above.

But the positive criterion wasn’t predictive enough to be helpful in making hospitalization decisions – 216 patients had abnormal EKGs, but only 36 had a cardiac outcome.  And then, there are significant baseline differences in the patients who had abnormal EKGs, and even more differences in the patients who had cardiac outcomes – the cardiac outcome cohort had an average age of 78.6, compared to an average age of 61.0 in the noncardiac outcome cohort, with probably even more comorbid differences they don’t tell us about.
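Plugging in the counts above shows why the headline NPV and the clinical utility diverge; this is just the arithmetic from the numbers quoted in the text.

```python
# Predictive values computed from the counts quoted above.
normal_ekg, events_with_normal_ekg = 428, 6
abnormal_ekg, events_with_abnormal_ekg = 216, 36

npv = (normal_ekg - events_with_normal_ekg) / normal_ekg     # ~98.6%
ppv = events_with_abnormal_ekg / abnormal_ekg                # ~16.7%

print(f"NPV of a normal EKG:    {npv:.1%}")
print(f"PPV of an abnormal EKG: {ppv:.1%}")
```

A 99% NPV sounds impressive, but with a PPV around 17%, an “abnormal EKG” flags five patients without a cardiac outcome for every one who has an event.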

So, a normal EKG is probably helpful in making your decision – but being younger and healthier probably accounts for more of the differences between their groups.

“Electrocardiogram Findings in Emergency Department Patients with Syncope”
www.ncbi.nlm.nih.gov/pubmed/21762234

Time To Let ABCD2 Die

The problem – the most difficult clinical situations are the ones where we most need a handy decision tool, and the ones where it is hardest to come up with an effective one.  Syncope rules, PE prediction rules, ACS prediction rules, and now TIA evaluation.

The most important number to come out of this paper is probably 1.8% – the proportion of patients with a TIA who went on to have a stroke in the next seven days.  That’s 38 of their 2,056 enrolled patients.  The next number is 2.7%, representing the 56 patients who had another TIA within 7 days.  So somehow a rule has to magically pick out the tiny proportion of patients who are going to have bad outcomes without excessively testing the remaining supermajority.

Nearly everyone had a CT of the head, nearly everyone had an EKG, very few (15% with an ABCD2 score ≤ 5 and 22% with a score > 5) had consultation with a neurologist, and even fewer were admitted.  The specificity for stroke within 7 days with a score >2 – the AHA definition of “high risk” – is only 12.5%.  Not only that, but there was significant disagreement between enrolling physicians and the study center regarding the correct ABCD2 score for a patient.
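To see what a 12.5% specificity means in practice, here is a rough calculation using the cohort numbers above (2,056 enrolled, 38 strokes at 7 days); the exact per-score counts aren’t given in the text, so this is only an approximation.

```python
# Rough implication of 12.5% specificity for the ABCD2 > 2 "high risk" cut-off,
# using the cohort counts quoted above (an approximation, not the paper's exact table).
enrolled = 2056
strokes_7d = 38
no_stroke = enrolled - strokes_7d        # 2,018 patients without a 7-day stroke

specificity = 0.125
flagged_without_stroke = round(no_stroke * (1 - specificity))  # ~1,766 patients

print(f"Patients without a 7-day stroke still labeled 'high risk': "
      f"~{flagged_without_stroke} of {no_stroke}")
```

Nearly nine of every ten patients who will do fine are still flagged as “high risk” – which is why the rule doesn’t help you safely discharge anyone.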

So, in the end, ABCD2 is difficult to apply and only minimally useful.  You’re going to miss half the strokes at 7 days if you use it at a cut-off where the specificity is >50% – so, sure, a sky-high score tells you they’re in trouble, but that still doesn’t help you discharge the majority of your TIAs safely for outpatient follow-up.

“Prospective validation of the ABCD2 score for patients in the emergency department with transient ischemic attack.”
www.ncbi.nlm.nih.gov/pubmed/21646462

ASPECT 2-Hour Rule-Out

Low-risk chest pain – if your ED doesn’t already have a chest pain unit set up for you to painlessly move patients through their enzymatic and non-invasive testing, you’re probably trying to find safe ways to discharge your chest pain patients home and avoid the repetitive calls to an unsympathetic hospitalist.  Problem is, without some kind of imaging or functional study, you’re invariably going to get burned.  This is another one of the TIMI-score-plus-X attempts at risk-stratifying patients, in a prospectively applied dry run of their protocol.  It’s TIMI 0 patients, plus a normal EKG, plus negative zero- and two-hour CK-MB/troponin/myoglobin.  Basically, 10% of their chest pain cohort fit this essentially zero-risk profile and were enzymatically ruled out.  And 0.9% (0.02 to 2.1%) of this slam-dunk non-cardiac group came back with an MI within 30 days.

Now, for a rational person who thinks we’re spending altogether too much money and resources trying to capture every last potential cardiac event – that sounds pretty reasonable.  Home with follow-up.  The problem is that with non-invasive testing in basically the same sort of low-risk cohort, whether stress testing or CTA, a negative test carries a 6+ month event-free period.  So the standard of care is unfortunately moving away from “no heart attack today!” toward prognosticating distant events.

The other great thing about this article was their mini systematic review, in which they count 115 of these prediction rules in the literature over the last fifteen years.  Clearly something everyone wants – but also something we can’t get right….

http://www.ncbi.nlm.nih.gov/pubmed/21435709