Tuesday, August 23, 2011

The PERC Rule Mini-Review

Journal club this month at my institution involved the literature behind the derivation and validation of the PERC (Pulmonary Embolism Rule-Out Criteria) Rule.  So, as faculty, to be dutifully prepared, I read the articles and a smorgasbord of supporting literature - only to realize I'm working the conference coverage shift.  Rather than waste my notes, I've turned them into an EMLit mega-post.

Derivation
The derivation of the PERC rule in 2004 comes from 3,148 patients for whom "an ER physician thought they might have Pulmonary Embolism".  Diagnosis was confirmed by CTA (196 patients), CTA + CTV (1116), V/Q (1055) + duplex U/S (372), angiography (11), autopsy (21), and 90-day follow-up (650).  348 (11% prevalence) were positive for PE.  They then did a regression analysis on those patients and came up with the PERC rule, an eight-item dichotomous test in which every single question must be answered "yes" to pass.
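For the visually inclined, here's a minimal sketch of the rule as a simple pass/fail function - the eight items as they're commonly summarized (age, heart rate, room-air oxygen saturation, hemoptysis, exogenous estrogen, prior VTE, unilateral leg swelling, recent surgery or trauma).  Double-check the original paper for the exact wording and thresholds before trusting my recall:

# Sketch of the eight PERC items as commonly summarized; verify against the original paper.
def perc_negative(age, pulse, sao2, hemoptysis, estrogen_use,
                  prior_dvt_pe, unilateral_leg_swelling, recent_surgery_trauma):
    """Return True if all eight criteria pass, i.e. the patient is 'PERC negative'."""
    return all([
        age < 50,
        pulse < 100,
        sao2 >= 95,                     # room-air pulse oximetry, %
        not hemoptysis,
        not estrogen_use,
        not prior_dvt_pe,               # no history of venous thromboembolism
        not unilateral_leg_swelling,
        not recent_surgery_trauma,      # within the prior four weeks
    ])

# Example: a 42-year-old with normal vitals and no risk factors is PERC negative.
print(perc_negative(42, 88, 98, False, False, False, False, False))  # True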

The test case for the derivation came from 1,427 "low-risk" patients who were PE suspects and, as such, had only a d-Dimer ordered to rule out PE - with a CTA performed when it was positive.  114 (8% prevalence) had PE.  There was also an additional "very low-risk" test case: 382 patients from another dyspnea study who were enrolled when "an ED physician thought PE was not the most likely diagnosis."  9 (2.3%) of the very low-risk cohort had PE.

Performance on their low-risk test set was a sensitivity of 96% (CI 90-99%) with a specificity of 27%.  On their very low-risk test set, sensitivity was 100% (59-100%) with a specificity of 15%.

Validation
Multicenter enrollment of 12,213 with "possible PE".  8,183 were fully enrolled.  51% underwent CTA, 6% underwent V/Q, and everyone received 45-day follow-up for a diagnosis of venous thromboembolism.  Overall, 6.9% of their population was diagnosed with pulmonary embolism.

Of these, 1,952 were PERC negative - giving rise to a 95.7% sensitivity (93.6-97.2%).  However, the authors additionally identify a "gestalt low-risk" group of 1,666 that had only 3.0% prevalence of PE, apply the PERC rule to that, and come up with sensitivity of 97.4% (95.8 - 98.5%).

The authors then conclude the PERC rule is valid and obviates further testing when applied to a gestalt low-risk cohort in which the prevalence is less than 6%.

Other PERC Studies
Retrospective application of PERC to another prospective PE database in Denver.  Prevalence of PE was 12% in 134 patients.  Only 19 patients were PERC negative, none of whom had PE.  Sensitivity was 100% (79-100%).

Retrospective application of PERC to patients receiving CT scans in Schenectady.  Prevalence of PE was 8.45% in 213 patients.  48 were PERC negative, none of whom had PE.  Sensitivity was 100% (79-100%).

Effectiveness study of PERC in an academic ED (Carolinas).  Of 183 suspected PE patients, PERC was applied to 114, 65 of whom were PERC negative.  16 of the PERC-negative patients underwent CTA, all negative.  14-day follow-up of the remaining 49 also indicated no further PE diagnosis.  No sensitivity calculation.

Retrospective application of PERC to a prospective PE cohort in Switzerland.  Prevalence of PE was 21.3% in 1,675 patients.  In the 221 patients who were PERC negative, 5.4% had PE (3.1 - 9.3%), for a sensitivity of 96.6% (94.2 - 98.1%).  The subset of PERC-negative patients who were also low-risk by Geneva score actually had a higher incidence of PE, at 6.4%.

Summary
So, PERC can only be applied to a population you think is low-risk for PE - for which you can use clinical gestalt or Wells' - because it looks like PE prevalence in a Wells' low-risk population runs from 1.3% (0.5-2.7%) to 2% (0-9%).  But you can't use Geneva, because low-risk prevalence there is closer to 8% - and that's essentially what the Swiss study shows.

But in this already very low-risk population, the question is, what is the role of PERC?  Clinical gestalt in their original study actually worked great.  Even though clinicians were only asked to risk stratify to <15%, they risk stratified to 3.0% prevalence of PE.  Which, of course, means our estimation of the true risk of pulmonary embolism is absolutely bonkers.  If you take a gestalt or Wells' low-risk population, apply PERC, and it's negative - your population that nearly universally didn't have a PE still doesn't have a PE, and it doesn't get you much in absolute risk reduction.  You probably shouldn't have even considered PE as a diagnosis other than for academic and teaching reasons if they're Wells' low-risk and PERC negative.

Then, if you take the flip side - what happens if your patient is PERC positive?  You have a low-risk patient whose prevalence for PE is probably somewhere between 1 and 5%, and now you've got a test with a positive LR of 1.24 - it barely changes anything from a statistical standpoint.  Then, do you do a d-Dimer, which has a positive LR between 1.6 and 2.77?  Now you've done a ton of work and painted yourself into a corner and you have to get a CTA on a patient whose chance of having a PE is still probably less than 10%.
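Just to show how little that +LR of 1.24 actually moves the needle, here's the Bayes arithmetic in odds form - the pre-test probabilities below are illustrative, not pulled from any of these studies:

# Post-test probability from pre-test probability and a likelihood ratio,
# via odds: post_odds = pre_odds * LR.  Illustrative numbers only.
def post_test_probability(pretest_prob, lr):
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

pretest = 0.03                                   # a 3% "gestalt low-risk" patient
print(post_test_probability(pretest, 1.24))      # PERC positive: ~0.037
print(post_test_probability(0.037, 2.0))         # then a positive d-Dimer (+LR ~2): ~0.07

Even stacking both "positive" results, you're still sitting under 10% - which is exactly the corner described above.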

That's where your final problem shows up.  CTA is overrated as a diagnostic test for pulmonary embolism.  In PIOPED II, published in 2006 in NEJM, CTA had 16 false positives and 22 true positives in the low-risk cohort - so 42% of positive scans were false positives - and that's measured against a reference standard the authors estimated already had a 9% false positive and 2% false negative rate.  CTA is probably better now than it once was, but it still has significant limitations in a low-risk population - and I would argue the false positive rate is even higher, given the increased resolution and ability to discern more subtle contrast filling defects.
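For the record, that 42% is just arithmetic on the counts quoted above:

# PIOPED II low-risk subgroup as quoted above: 22 true positives, 16 false positives.
tp, fp = 22, 16
print(fp / (tp + fp))   # ~0.42 - roughly 42% of positive CTAs were false positives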

So, this is what I get out of PERC.  Either you apply it to someone you didn't think had PE, it's negative, and you wonder why you bothered to apply it in the first place - or you follow it down the decision tree and end up at a CTA where you can flip a coin as to whether a positive result is real or not.

And, I don't even want to get into the clinical relevance of diagnosis and treatment of those tiny subsegmental PEs we're "catching" on CTA these days.

"Clinical criteria to prevent unnecessary diagnostic testing in emergency department patients with suspected pulmonary embolism"
www.ncbi.nlm.nih.gov/pubmed/15304025

"Prospective multicenter evaluation of the pulmonary embolism rule-out criteria"
www.ncbi.nlm.nih.gov/pubmed/18318689

"Assessment of the pulmonary embolism rule-out criteria rule for evaluation of suspected pulmonary embolism in the emergency department"
www.ncbi.nlm.nih.gov/pubmed/18272098

"The Pulmonary Embolism Rule-Out Criteria rule in a community hospital ED: a retrospective study of its potential utility"
www.ncbi.nlm.nih.gov/pubmed/20708891

"Prospective Evaluation of Real-time Use of the Pulmonary Embolism Rule-out Criteria in an Academic Emergency Department"
www.ncbi.nlm.nih.gov/pubmed/20836787

"The pulmonary embolism rule-out criteria (PERC) rule does not safely exclude pulmonary embolism"
www.ncbi.nlm.nih.gov/pubmed/21091866

"Multidetector Computed Tomography for Acute Pulmonary Embolism"
www.ncbi.nlm.nih.gov/pubmed/16738268

"D-Dimer for the Exclusion of Acute Venous Thrombosis and Pulmonary Embolism"
www.ncbi.nlm.nih.gov/pubmed/15096330

http://www.mdcalc.com/perc-rule-for-pulmonary-embolism

Monday, August 22, 2011

ACI-TIPI For Predicting Cardiac Outcomes

In an earlier post, I noted an article that had done a systematic review finding 115 publications attempting to create or validate clinical prediction rules for chest pain.  Well, here's number 116.

The ACI-TIPI (Acute Cardiac Ischemia Time-Insensitive Predictive Instrument) is computerized analysis software that generates a score for the likelihood of cardiac ischemia based on age, gender, chest pain and EKG variables.  It's actually a product marketed and sold by Philips.  These authors tried to evaluate how well this instrument predicted 30-day events, with an interest in identifying a group that could be safely discharged from the Emergency Department.

In an institution with 55,000 visits a year, the authors recruited only 144 chest pain patients - which is the first red flag.  It doesn't matter how good your prediction rule is if you only recruit 144 patients - your confidence intervals will be terrible, and their sensitivities for identifying 30-day cardiac outcomes are 82-100% at best.  And, yes, they did say that if the ACI-TIPI score is <20, it had a purportedly useful negative predictive value.
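To illustrate why 144 patients can't get you there, here's a quick Wilson score interval - the event count below is made up purely for illustration, not taken from their paper:

import math

# Wilson score interval for a proportion - illustrative only.
def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# If a 144-patient study contains ~15 thirty-day events and the rule catches all 15,
# that "100% sensitivity" still carries a dismal lower confidence bound.
print(wilson_ci(15, 15))   # roughly (0.80, 1.0)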

So, I suppose this paper doesn't really tell us much - and even if the data were better, I'm not sure the sensitivity/specificity of this ACI-TIPI calculation would meet a useful clinical threshold to reduce low-risk hospitalizations any better than clinical gestalt.  I'll be back with you when I find risk-stratification attempt 117....

"Prognostic utility of the acute cardiac ischemia time-insensitive predictive instrument (ACI-TIPI)"
www.intjem.com/content/4/1/49

Saturday, August 20, 2011

Sometimes, The Pregnancy Test Lies

A couple years ago, my hospital pulled the POC urine pregnancy tests from the ED because of false negatives - leading to incredulous discussions of how it was possible for a nursing assistant to screw up something so simple as a dichotomous colorimetric test.

Well, at Washington University, when they had multiple issues with their POC pregnancy test, they investigated the issue in more depth, and this nice little article is an overview of the limitations of the test.  There are two ways the POC test fails:
 - Not pregnant enough.
 - Too pregnant.

We all know about sensitivity in early pregnancy really only being 97% or so at one week, and no one will fault the test for that.  However, in their case series of five patients, all with serum hCG >130,000, the excess hCG is hypothesized to have saturated the reagent to the point of a false-negative test.

In any event, interesting article about something I hadn't put much thought into.

"'Hook-Like Effect' Causes False-Negative Point-Of-Care Urine Pregnancy Testing in Emergency Patients"
http://www.ncbi.nlm.nih.gov/pubmed/21835572

Friday, August 19, 2011

CT Coronary Angiography Proves People With CAD Die Sooner

This is a neat study that followed up 23,854 patients from a multicenter CTCA registry - the CONFIRM registry - over three years to evaluate their long-term prognostic risk.  And - amazingly enough - the patients who had no coronary artery disease identified on their CTCA had an annualized all-cause death rate of 0.28%.  Which seems pretty impressive, and it's better than the people who had non-obstructive and various types of obstructive CAD on their CTCA.

But then, the hazard ratio for patients who had 3-vessel or left main disease on their CTCA was still only about six times that of the no-CAD cohort - which is a lot higher in relative terms, but still not very high in absolute terms - and there were a lot of other comorbidities in these patients that would contribute to their all-cause mortality from non-cardiac causes.  So, yes, not having CAD - as well as being a generally healthy person - helps you live longer.
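In rough absolute terms - treating the hazard ratio as a simple multiplier on the annualized rate, which is a simplification:

# Treating the hazard ratio as an approximate multiplier on the annualized rate
# (a reasonable simplification when event rates are this low).
baseline_annual_mortality = 0.0028    # 0.28%/year with no CAD on CTCA
hazard_ratio = 6.0                    # roughly the worst anatomic findings
print(baseline_annual_mortality * hazard_ratio)   # ~0.017, i.e. ~1.7% per year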

The question still remains where CTCA fits into an Emergency Department evaluation for chest pain.  We are seeing more and more research now showing that primary PCI for asymptomatic lesions offers no survival benefit over medical management - so identifying these lesions and admitting these patients to cardiology for intervention isn't going to be in our future.  Considering over 55% of their cohort had either non-obstructive or obstructive disease found, you're now going to be on the hook for making outpatient CAD risk-modification decisions after cardiology declines them.

Whether CTCA is used should be a standardized, institution-wide decision, because I don't think anyone wants to take the weight of sorting through all this evidence and risk/benefit ratios as a lone wolf.

"Age- and Sex-Related Differences in All-Cause Mortality Risk Based on Coronary Computer Tomography Angiography Findings"
www.ncbi.nlm.nih.gov/pubmed/21835321

Wednesday, August 17, 2011

CT Use Is Increasing(ly Justified?)

Retrospective cohort analysis based off the NHAMCS dataset, with all the inherent limitations within.

We have a 330% increase in the use of CT in the Emergency Department - up from 3.2% of visits in 1996 to 13.9% in 2007.  This increase is pretty stable across all age groups (including a rate of up to nearly 5% now in patients under 18 years of age).  The interesting part of the paper - the part we didn't already know - is their data regarding the adjusted rate of hospitalization or transfer after receiving CT.  In 1996, 26% of patients receiving a CT were admitted to the hospital, while now only 12% of patients receiving CT are admitted to the hospital.

The problem is, I've seen news organizations running with the conclusion: CT rates might be higher, but since the relative risk of hospitalization is lower after a CT, it must be preventing hospitalizations.  But you can't draw any such conclusion from the data - particularly considering hospitalizations have climbed over that same period.

We just aren't seeing any data that links the increase in CT use to improved outcomes.  Increased CT usage certainly has its place as the standard of care in many instances, but there's no silver lining to this 330% increase.

"National Trends in Use of Computer Tomography in the Emergency Department."
www.ncbi.nlm.nih.gov/pubmed/21115875

Tuesday, August 16, 2011

Viral or Bacterial Infection? A Blood Test

This is another "someday, in the future" article that made the rounds with the news releases yesterday - where, supposedly, within a few hours of infection, there are significant differences in phagocyte chemiluminescence that allow researchers to differentiate between viral and bacterial infections.

As usual, the breathless commentary is a little ahead of the actual research results.  What the authors did was a data-mining experiment on 69 patients, each of whom had been diagnosed (through standard clinical practice) with either a viral infection or a bacterial infection.  They ran all the polymorphonuclear leukocytes through their assay, recorded several different sorts of chemiluminescence, and then let computer software do a partitioning analysis to determine the most predictive patterns for bacterial and viral infections.

The software trained to 94.7% accuracy on the "knowns", and then, when tested on a confusion sample with 18 "unknowns" it was 88.9% accurate.
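For anyone unfamiliar with the training-versus-testing distinction, it's the same one you'd see in any supervised classification exercise - here's a toy sketch with made-up data and an off-the-shelf decision tree, which is emphatically not their assay or their software:

# Toy illustration of the train/test accuracy gap - not the authors' software or data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(87, 5))        # 87 pretend patients, 5 chemiluminescence-like features
y = rng.integers(0, 2, size=87)     # 0 = viral, 1 = bacterial (labels from clinical diagnosis)

# Hold out 18 "unknowns", mirroring the study's blinded test sample.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=18, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(accuracy_score(y_train, model.predict(X_train)))   # accuracy on the "knowns" - optimistic
print(accuracy_score(y_test, model.predict(X_test)))     # accuracy on held-out "unknowns" - lower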

So, still not good enough for clinical use as a dichotomous result, but if it were allowed to return an equivocal range that quantified the assay uncertainty, then perhaps it could have a role in clinical practice.  In theory, an assay such as this might otherwise reduce additional testing and help reduce the number of viral infections that receive antibiotics.

"Differentiation Between Viral and Bacterial Acute Infectious Using Chemiluminescent Signatures of Circulating Phagocytes"
http://www.ncbi.nlm.nih.gov/pubmed/21517122

Sunday, August 14, 2011

Tranexamic Acid - Critique of CRASH-2


These authors review the literature regarding TXA and its cost/risk/benefit for hemostatic control of injured trauma patients.  Of course, this specifically means they review the single significant piece of literature for TXA - the CRASH-2 trial published in the Lancet.

I'm not sure I entirely agree with their premise that TXA is safer because it is just an antifibrinolytic rather than an activator of clotting/platelet aggregation - clot formation and breakdown is a dynamic process and any interference in that system carries a risk.  But, they do a fairly detailed look at TXA and the CRASH-2 trial, and I think they make a fair and defensible point that, while the NNT is pretty high, it is a fairly low cost intervention with a relevant outcome variable of overall mortality.
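For reference, the NNT arithmetic is simple - the mortality figures below are approximate, in the ballpark of what CRASH-2 reported, so check the trial itself for the exact numbers:

# Number needed to treat = 1 / absolute risk reduction.
# Mortality figures are approximate, roughly CRASH-2's ~16% vs ~14.5% all-cause mortality.
control_mortality = 0.160
txa_mortality = 0.145
arr = control_mortality - txa_mortality
print(round(1 / arr))    # ~67 patients treated to prevent one death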

While a study with 20,000 patients is a nice start, I'd still like to see at least one other prospective study replicating similar results with an appropriate safety analysis.

"Tranexamic Acid for Trauma Patients: A Critical Review of the Literature"
www.ncbi.nlm.nih.gov/pubmed/21795884

Saturday, August 13, 2011

We Still Can't Predict Cardiac Outcomes in Syncope


The authors of this article claim that the San Francisco Syncope Rule - which we've already put out to pasture - has simple EKG criteria that "can help predict which patients are at risk of cardiac outcomes".

And, they're only possibly partly right.  Of the 644 patients in their syncope cohort, 42 had cardiac events within the 7-day follow-up period.  Of those 42, 36 met the criteria for "abnormal EKG".  If you had a completely normal EKG, it was 6 out of 428 that had a cardiac event, which gave them the 99% NPV upon which they base the quoted statement above.

But the positive criteria weren't predictive enough to be helpful in making hospitalization decisions - 216 patients had abnormal EKGs, but only 36 had a cardiac outcome.  And then, there are significant differences in the patients who had abnormal EKGs, and even more differences in the patients who had cardiac outcomes - the cardiac outcome cohort had an average age of 78.6 compared to 61.0 in the noncardiac outcome cohort, with probably even more comorbid differences they don't tell us about.
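Running their quoted counts through the usual 2x2 arithmetic shows why the negative side looks so much better than the positive side:

# Using the counts reported above: 42 cardiac events, 36 of them with abnormal EKGs;
# 216 abnormal EKGs total; 428 normal EKGs, 6 of whom had events.
sensitivity = 36 / 42      # ~0.86
ppv = 36 / 216             # ~0.17 - why the "abnormal EKG" side isn't actionable
npv = (428 - 6) / 428      # ~0.986 - the quoted "99% NPV"
print(sensitivity, ppv, npv)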

So, a normal EKG is probably helpful in making your decision - but being younger and healthier probably accounts for more of the differences between their groups.

"Electrocardiogram Findings in Emergency Department Patients with Syncope"
www.ncbi.nlm.nih.gov/pubmed/21762234

Thursday, August 11, 2011

CT Is No Longer Adequate To Clear C-Spine

The insanity never stops.  It's a good thing MRI is becoming increasingly available, because the more papers like this are published in major journals, the more we're going to be stuck following every possible outcome to its bitterest end with the strongest microscope we have.

There are lots of problems with using this paper to change practice - of their 9152 patients undergoing CT for trauma, 741 had persistent midline tenderness leading towards MRI.  Of those 741, only 174 were enrolled, for a variety of reasons.  And this study doesn't give us enough information about the 78 patients in whom an injury was detected to differentiate them from the patients in whom no injury was detected.

But the fact remains, they identified serious injuries on MRI in patients who had negative CTs - and not just obtunded, intubated, polytrauma patients like in the other studies.

Just one more thing to worry about.

"Cervical Spine Magnetic Resonance Imaging in Alert, Neurologically Intact Trauma Patients With Persistent Midline Tenderness and Negative Computed Tomography Results"
http://www.ncbi.nlm.nih.gov/pubmed/21820209

Wednesday, August 10, 2011

The Slow Death of the Lumbar Puncture

As modern CT scanners become more sensitive, the ability of scanners to discriminate smaller and smaller abnormalities - such as spontaneous aneurysmal subarachnoid hemorrhage - continues to increase.  This BMJ paper makes another case for forgoing lumbar puncture in patients with a negative CT scan.

Specifically, they say that all the SAH in their cohort was picked up by a 3rd generation scanner as long as the scan was performed within six hours of headache onset.  Unfortunately, this is another one of those studies that uses follow-up as a proxy for the gold standard evaluation - only half of their enrolled cohort underwent lumbar puncture.  They followed up their patients for six months, but survival at six months doesn't rule out pathology, it only rules out death from that specific pathology, and only if an autopsy was performed.

But, CT scan is starting to get close to the point where the false negatives of CT are equivalent to the false positives of the lumbar puncture - and I would imagine the costs and harms to the patient begin to approach equivalence.  It definitely changes the equation for your patients when you come back with a negative CT scan and your patient wants to know what the chances are they really need this lumbar puncture.

"Sensitivity of Computer Tomography Performed Within Six Hours of Headache For Diagnosis of Subarachnoid Haemorrhage: Prospective Cohort Study"
www.ncbi.nlm.nih.gov/pubmed/21768192

Monday, August 8, 2011

High-Risk Discharge Diagnoses

Good news - only 0.05% of your discharged patients will meet an untimely end within 7 days of the Emergency Department visit.  Not a frightening number, but definitely enough to keep you on your toes.

It's a retrospective Kaiser Health System cohort of 728,312 visits across two years, and the authors calculated the base rate of 50 per 100,000, as well as looking at other features and discharge diagnoses that increased the OR for death within 7 days.  And, even the sickest, most elderly have ORs that are low enough that you're still going to have good outcomes the overwhelming preponderance of the time.  Age greater than 80 gives an OR of 10.6, and a score >3 on the Charlson Comorbidity Index gives an OR of 6.7.  As for the diagnoses most highly associated with bad outcomes - the only two with ORs greater than 5 are noninfectious lung disease (OR 7.1) and renal disease (OR 5.6).  These are interesting buckets of diagnoses, specifically in how nonspecific they are - which the authors attribute to diagnostic uncertainty.  I.e., the reason patients had bad outcomes with "noninfectious lung disease" is that clinicians missed the specific morbid diagnosis in these patients.
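To put those odds ratios into absolute terms - a rough conversion through odds against the 50-per-100,000 base rate, ignoring that the ORs are technically relative to their reference groups rather than the whole cohort:

# Convert an odds ratio back to an approximate absolute risk, starting from the
# 50-per-100,000 (0.05%) baseline 7-day death rate reported above.  Rough approximation.
def risk_from_or(baseline_risk, odds_ratio):
    baseline_odds = baseline_risk / (1 - baseline_risk)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)

print(risk_from_or(0.0005, 10.6))   # age > 80: ~0.0053, i.e. about 0.5%
print(risk_from_or(0.0005, 7.1))    # noninfectious lung disease: ~0.0035, about 0.35%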

I don't think this is practice-changing news, since these rates are so low in general that additional testing and hospitalization will harm more people than these missed diagnoses - but it's an interesting number crunch article.

"Patterns and Predictors of Short-Term Death after Emergency Department Discharge"

Sunday, August 7, 2011

Against Medical Advice

This is a nice review article that shows a mix of different issues associated with signing a patient out AMA.  It's a strange practice environment we have here, where EM is turning into an increasingly customer-centric practice specialty - yet unless we have airtight documentation, our customers can litigate against us for the choices they make.

In principle, our patients have the autonomy to make their own decisions - but our cultural values have drifted away from accepting responsibility for our actions.  To best protect ourselves, the authors recommend using a specific AMA form - not because having the patient's signature on a form confers any extra legal protection, but because it's a structured document that helps remind clinicians to document the two key elements of the AMA:  that the patient had medical capacity to make the decision, and that the patient was adequately informed of the risks.   After you satisfy both those conditions, the key is simply complete documentation in the medical record, and you should be afforded some protection given the patient has now terminated the legal duty to treat and assumed the risk for further poor outcomes.

"The Importance of a Proper Against-Medical-Advice (AMA) Discharge"
www.ncbi.nlm.nih.gov/pubmed/21715123

Friday, August 5, 2011

Physicians Will Test For PE However They Damn Well Please

Another decision-support in the Emergency Department paper.

Basically, in this study, an emergency physician considered the diagnosis of pulmonary embolism - and a computerized intervention forced the calculation of a Wells score to help guide further evaluation.  Clinicians were not bound by the recommendations of the Wells calculator to guide their ordering.  And they sure didn't follow them.  There were 229 patients in their "post-intervention" group, and 26% of their clinicians decided evidence-based medicine wasn't for them and were "non-compliant" with the testing strategy.

So, did the intervention help increase the number of positive CTAs for PE?  Officially, no - their trend from 8.3% positive to 12.7% positive didn't meet significance.  Testing-guideline compliant CTA positivity was 16.7% in the post-intervention group, which, to them, validated their intervention.

It is interesting that a low-risk Wells + positive d-Dimer or high-risk Wells cohort had only a 16% positive rate on a 64-slice CT scanner - which doesn't really match up with the original data.  So, I'm not sure exactly what to make of their intervention, testing strategy, or ED cohort.  I think the take-home point is supposed to be, if you can get evidence in front of clinicians, and they do evidence-based things, outcomes will be better - but either this was just too complex a clinical problem to prove it with, or their practice environment isn't externally valid.

Thursday, August 4, 2011

Should Rural Health Care Be Equivalent?

"All residents in the United States should have access to safe, high-quality health care and should have confidence in the health care system regardless of where they live."

That is the final statement of the accompanying editorial to the JAMA article documenting superior outcomes in urban hospitals vs. critical access rural hospitals for acute MI, CHF, and pneumonia.  The acute MI study population is slightly more ill at baseline in the rural hospital sample, but the groups are otherwise similar.  Raw mortality is higher for AMI (26.1% vs 23.9% adjusted), CHF (13.4% vs. 12.5%) and pneumonia (13.0% vs. 12.5% [not significant]), favoring urban hospitals.

The key feature - critical access hospitals were less likely to have ICUs, cardiac cath, surgical capabilities, and had reduced access to specialists.  Is it any wonder their outcomes are worse?  As someone who moonlit in one of these hospitals as a resident, I can guarantee the standard of care in a rural setting is lower.

But, coming back to the original supposition - is it realistic to dedicate the funding and resources to bring rural hospitals up to the standard?  Equipping far-flung hospitals to the same standard of care as urban settings to cover the remaining 20% of the population is likely an unfeasible proposition.  Living in rural areas is simply going to come with the risks associated with unavoidable delays in care and reduced access to specialists and technology.

"Quality of Care and Patient Outcomes at Critical Access Rural Hospitals"
www.ncbi.nlm.nih.gov/pubmed/21730240
"Critical Access Hospitals and the Challenges to Quality Care"
www.ncbi.nlm.nih.gov/pubmed/21730248

Tuesday, August 2, 2011

It's Impossible To Catch All Pediatric Pneumonia

Another glass half-full vs. half-empty, depending on how you read it.  Their editor capsule summary says "Children without hypoxia, fever, and auscultatory findings are low risk."  The numbers say - in the absence of hypoxia, fever, or focal auscultatory findings, radiographic pneumonia was seen in 7.6% (CI 5.3-10.0).  Interesting numbers that, to me, say that pediatric pneumonia is still a black box of uncertainty.

However, what the authors call "definite" pneumonia was only 2.9% in the absence of those findings, and the editor's capsule conclusion is that low-risk patients are best served by follow-up rather than radiology.  And, this is where the half-full/half-empty comes in - because a lot of EPs don't want to be the guy who sends home pneumonia even in a "low risk" situation, given that 30% of their pneumonia diagnoses required admission.  I'd rather take the half-full approach - recognizing that the majority of radiographic pneumonias are viral anyway, and, if the patient has adequate follow-up and tunes up nicely, do my best to avoid unnecessary testing in a low pretest probability setting that will end up with more false positives and unnecessary antibiotics.

"Prediction of Pneumonia in a Pediatric Emergency Department"

Monday, August 1, 2011

Does EHR Decision Support Make You More Liable?

That's the question these JAMA article authors asked themselves, and they say - probably.  The way they present it, it's probably true - using the specific example of drug-drug interactions.  If you put an anticoagulated elderly person on TMP-SMX and they come back a few days later bleeding with an INR of 7, you might be in trouble for clicking away the one important drug alert out of the hundred you're inundated with on your shift.  The authors note how poorly designed the alerts are, how few are relevant, and "alert fatigue" - but really, if you're getting any kind of alerts or have any EHR tools available to you during your practice, each time you dismiss one, someone could turn it around against you.

The authors' potential solutions are an "expert" drug-drug interaction list or legislative legal safe harbors.

"Clinical Decision Support and Malpractice Risk."
www.ncbi.nlm.nih.gov/pubmed/21730245

Friday, July 29, 2011

"Narcotic Bowel Syndrome"

I had never heard this specific diagnosis bandied about in an Emergency Medicine context - but, essentially, it's a gastroenterology entity (and diagnosis of exclusion) that entails chronic, intractable, crampy abdominal pain of unknown etiology and concurrent narcotic use.  I can't even describe how many of these patients I saw each shift during residency - and how many of those people had multiple CT scans in the past year.  The key feature in this particular diagnosis, as described in their case, is they had extensive follow-up evaluation, were weaned from their narcotics, and had resolution of symptoms.

I think this is a diagnosis spectrum we see a lot in the ED - whether it be constipation, IBS, cyclic vomiting syndrome, "feeling sick", or the multitudinous abdominal pain of unknown etiology.  With more and more patients being prescribed (or secretly taking) narcotics, what we see in our EDs is not just the overdose emergencies, but the various side effect spectrums of dependence and withdrawal.

You'd think that with all our medical technological prowess we'd have better mechanisms to treat pain than they did thousands of years ago.

"Narcotic Bowel Syndrome"
http://www.ncbi.nlm.nih.gov/pubmed/21719232

Thursday, July 28, 2011

Endotracheal Tube Verification Via Ultrasound

I think I've discovered the new paradigm of research in ultrasound.  Every time you do a procedure or make a diagnosis, slap the ultrasound on someone and see if you can reliably identify anatomic changes.

It looks like, with their practiced ultrasonographers, they can get some preliminary information regarding endotracheal tube placement by performing transtracheal ultrasound.  Their "gold standard" was waveform capnography - which is a fair gold standard, but not universally sensitive and specific for tube placement in all clinical situations.  Essentially, if the ETT is in the correct place, there is only one "air-mucosal interface" observed with a high-frequency linear probe, and, if the ETT is in the esophagus, you see a second, posterior air-mucosal interface.

Seems reasonable.

Experts did it correctly with 99% sensitivity and 94% specificity, and the main advantage was speed.
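If you like likelihood ratios, they fall straight out of those reported numbers:

# Likelihood ratios from the reported 99% sensitivity and 94% specificity.
sens, spec = 0.99, 0.94
print(sens / (1 - spec))      # +LR ~16.5
print((1 - sens) / spec)      # -LR ~0.011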

"Tracheal rapid ultrasound exam (T.R.U.E.) for confirming endotracheal tube
placement during emergency intubation."

Tuesday, July 26, 2011

Online Publishing of ED Wait Times

When a small city only has two Emergency Departments, you can run a study like this to see what effect publication of ED wait times has on visits.

While it is fabulously logical that if 18 to 40 people a day are looking at your Emergency Department wait times, some portion of those people will choose a facility with a shorter wait time - or choose not to come to the ED at all - or choose to come in when they might not have otherwise come in if the wait time is short - this study doesn't actually try to study the population of interest.  They need to somehow capture the individuals who are using the published information to make decisions, rather than looking generally at their overall wait time statistics - because, even though they say their results "were consistent with the hypothesis that the publication of wait time information leads to patients selecting the site with shorter wait time", they are making a huge unsubstantiated leap.

Looking at their descriptive statistics, hardly anything changed to actually justify their conclusions, and, really, it looks like patients just based their decisions pretty heavily on which of the two hospitals was closer - particularly Victoria Hospital, which people only went to if it was nearer.  I do also find it fascinating that their mean wait time rose from about 105 minutes to 115 minutes, yet the proportion of time the wait was >2 hours (120 minutes) actually dropped from 13% to 9%.  This is how they justify their conclusion that the "spikes" are mitigated by online usage - and it may be true - but there are too many moving parts, and they aren't actually asking people whether they used the website and acted on the information from it.

"The effects of publishing emergency department wait time on patient utilization patterns in a community with two emergency department sites: a retrospective, quasi-experiment design."
http://www.ncbi.nlm.nih.gov/pubmed/21672236

Monday, July 25, 2011

Facebook, Savior of Healthcare

This is just a short little letter I found published in The Lancet.  Apparently, the Taiwan Society of Emergency Medicine has been wrangling with the Department of Health regarding appropriate solutions to the national problem of ED overcrowding.  To make their short story even shorter, apparently, they ended up forming a group on Facebook, and then posting their concerns to the Minister of Health's Facebook page.  This then prompted the Minister of Health to make surprise visits to several EDs, and, in some manner, the Taiwanese feel their social networking has led to a fortuitous response to their public dialogue.

So, slowly but surely, I'm sure all these little blogs will save the world, too.

"Facebook use leads to health-care reform in Taiwan."
http://www.ncbi.nlm.nih.gov/pubmed/21684378