Getting Chronic Pain Out of the Emergency Department

This is yet another entry in the parade of narcotic-overuse articles – but, at least, this one shines some light on potential solutions.

Their introduction is full of the standard lovely doom-and-gloom statistics:

  • From 1999 to 2008, prescription drug abuse rose concomitantly, with reported opioid overdose deaths tripling.
  • Health care providers wrote 259 million prescriptions for painkillers in 2012, enough for every American adult to have a bottle of pills.
  • Approximately 16,000 deaths in the United States (U.S.) are attributed to prescription opioid overdose annually.

Their report is simply a single-center pre- and post-intervention study.  They looked back, specifically, at 46 patients identified as “high utilizers” with chronic pain and prior documentation of Emergency Department misuse.  Each of these patients was evaluated in concert with their primary care physician or chronic pain specialist, and a specific follow-up plan-of-care was established.  All patients were informed of their care plan, the most salient portion being that narcotics and benzodiazepines would essentially be omitted from any Emergency Department visit.

Patient visits to the ED declined from an average of 6.2 in the six-month pre-intervention period to 2.2 in the post-intervention period.  These data would indicate an even more profound effect were it not for one patient whose change in treatment plan paradoxically increased his ED visits four-fold, resulting in nearly 40 ED visits in the post-intervention period.  Unfortunately, these data reflect use of only one hospital’s ED, and do not indicate whether visits to other EDs were affected.

The reduction in ED visits is certainly favorable on its face – but, better yet, the median number of narcotic pills prescribed to each patient, as recorded in the state database, dropped from 664 to 471.  Changes in pill use, however, were much more heterogeneous – and could be confounded by prescribing in closely neighboring states.  And, regardless of the improvement, pain management clearly continued to consist of fairly robust quantities of narcotics.

It’s at least a start, and definitely some improvement.  If your system has the resources to develop care plans for individual patients, the benefits likely outweigh the drawbacks.

“Impact of a Chronic Pain Protocol on Emergency Department Utilization”
http://www.ncbi.nlm.nih.gov/pubmed/26910248

Let’s Replace: Droperidol; With: Olanzapine

The U.S. has been suffering acute droperidol-emia for quite some time now.  Around the time ondansetron was exclusively on-patent, the FDA “coincidentally” published a “black box” for this otherwise universally beloved medication.  This led to restrictions on its use at many institutions.  Then, recently, it simply became unavailable from any local manufacturer.  It is a sad world, indeed.

At times, I have replaced its use with haloperidol.  Droperidol and haloperidol are, of course, both typical butyrophenone antipsychotics – but the complex metabolism of haloperidol leads to some unpredictability in sedation and effect.  Although olanzapine is approved only for oral and intramuscular use, I have occasionally used it intravenously for headache in the Emergency Department – and, now, the fine folks at Hennepin County have published their robust experience.

This is simply a retrospective review of its use at their institution, with an eye towards safety events, not effectiveness.  Six months of data, comprising 713 patients, were manually reviewed by two co-authors.  The most common indications for olanzapine were agitation (34.4%), abdominal pain (23.1%), headache (17.0%), nausea & vomiting (15.0%), and non-specific pain (8.4%).  Many of the patients reviewed also received olanzapine co-administered with other sedating medications, including opiates, benzodiazepines, and ketamine.

Unfortunately, this study ultimately does not provide terribly insightful data regarding olanzapine use.  Only 20 patients had an EKG both before and after administration, and the median increase in QTc was 12 ms.  Four patients suffered akathisia “likely” or “possibly” related to olanzapine.  About 10% of patients required supplemental oxygen for respiratory depression and hypoxia.  Then, seven patients required intubation, and two patients died during their subsequent hospitalizations.  Each of these serious adverse outcomes was multifactorial.

So, there is no simple answer regarding olanzapine.  Just like the case series behind the black box for droperidol, olanzapine was used in many patients with significant comorbid physiology or pharmacology.  Without a control or comparator group, this study cannot address the comparative efficacy and safety of any potential alternative agents.  It is probably reasonable to consider olanzapine as an option in situations where droperidol was previously used, but its effectiveness and true safety remains unknown.

“A Large Retrospective Cohort of Patients Receiving Intravenous Olanzapine in the Emergency Department”
http://www.ncbi.nlm.nih.gov/pubmed/26720055

Changing Clinician Behavior For Low-Value Care

I’ve reported in general terms several times regarding, essentially, the shameful rate of inappropriate antibiotic prescribing for upper respiratory infections.  Choosing Wisely says: stop!  However, aggregated data seem to indicate the effect of Choosing Wisely has been minimal.

This study, from JAMA, is a prospective, cluster-randomized trial of multiple interventions in primary care practices aimed at decreasing inappropriate antibiotic use.  All clinicians received education on inappropriate antibiotic prescribing.  Then, practices and participating clinicians were randomized either to electronic health record interventions of “alternative suggestion” or “accountable justification”, to peer comparisons, or combinations of all three.

The short answer: it all works.  The complicated answer: so did the control intervention.  The baseline rate of inappropriate antibiotic prescribing in the control practices was estimated at 37.1%.  This dropped to 24.0% in the post-intervention period, reflecting a roughly constant downward trend throughout the study period.  However, each intervention, singly and in combination, resulted in a much more pronounced drop in inappropriate prescribing.  While inappropriate prescribing in the control practices had reached the mid-teens by the end of the study period, each intervention group was approaching a floor in the single digits.  Regarding safety, only one of the seven intervention practice clusters had a significantly higher 30-day revisit rate than control.

While this study describes an intervention for antibiotic prescribing, the basic principles are sound regarding all manner of change management.  Education, as a foundation, paired with decision-support and performance feedback, as shown here, is an effective strategy to influence behavioral change.  These findings are of critical importance as our new healthcare economy continues to mature from a fee-for-service free-for-all to a value-based care collaboration.

“Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices”
http://www.ncbi.nlm.nih.gov/pubmed/26864410

ED Recidivism And/Or “Quality”

I’ve only worked in a handful of Emergency Departments – but at each institution, 72-hour Emergency Department recidivism has been tracked.  The simple act of bothering to track such events implies a very simple conclusion: these revisits somehow reflect poor care, missed diagnoses, or other opportunities to prevent return visits.  At a minimum, it’s best not to be an outlier at the high end.

These authors performed a retrospective evaluation of Healthcare Cost and Utilization Project data from New York and Florida, looking specifically at the outcomes of patients returning to the Emergency Department after an index visit.  Based on approximately 9 million Emergency Department visits, these authors found recidivism starting at 8.2% by 7 days and increasing to 16.6% within 30 days.  The proportion of re-visits resulting in admission to the hospital was stable at ~14.5% across the time period.  Patients with the greatest number of ED visits per year were the most likely to return, and the most likely to be admitted.  Interestingly, only approximately one-quarter of revisits were identified as being for the same condition as the index visit.

The authors’ analysis focuses on comparing the outcomes of patients admitted at an index visit, re-admitted after an ED visit, and those re-admitted after a discharge from the hospital, including ICU admission, length of stay, mortality, and hospital costs.  For what little insight it gives us, these outcomes tended to favor those discharged from the ED – although discharged patients were obviously younger and healthier at baseline than those who were analyzed as hospital readmissions.

These data – given the limitations of their source – do very little to inform any conclusions regarding the underlying processes at work.  And, in essence, by lacking such insight, these data help support the conclusions of the authors: Emergency Department recidivism should not be used as a quality measure.  This level of administrative data whitewashes any clues regarding the etiology of re-visits: are they misdiagnoses?  Are they high healthcare utilizers with chronic problems?  Is system access to primary care inadequate?  Are they scheduled returns for wound care?  Were these patients appropriately given trials of outpatient therapy with an expected failure rate?  Were they simply just very satisfied patients returning to their new favored location for care?  The overall recidivism rate, with all these confounders, is such a poor surrogate for possible missed diagnosis – and for whether such missed diagnoses truly represent “low quality” care – that the very opacity of the data presented by these authors proves its inadequacy.

Even more importantly, this is excellent context with which to review the proposed Clinical Emergency Data Registry quality measures. Do they accurately reflect the underlying quality of care?  Can they be reliably and accurately measured with little impact on workflow and care delivery?  Capturing data on care delivery is an important part of improving our specialty, but this draft requires substantial feedback.

“In-Hospital Outcomes and Costs Among Patients Hospitalized During a Return Visit to the Emergency Department”
http://jama.jamanetwork.com/article.aspx?articleid=2491638

Missed a Stroke? You’re Not Alone

It’s easy to fall prey to the quality assurance shaming associated with your hospital’s stroke team.  It’s nearly impossible to find the right balance between over-triage of any remotely neurologic complaint, and getting the inevitable nastygram follow-ups resulting from unexpected downstream stroke diagnoses.

Take heart: it’s not just you.

This retrospective review evaluated patients discharged with a diagnosis of acute stroke at two hospitals – one an academic teaching institution, and one a non-teaching community hospital.  All patients discharged with such a diagnosis were reviewed manually by a neurologist, and charts were analyzed specifically to quantify the frequency with which an Emergency Physician did not initially document acute stroke as a possible diagnosis, or a consultant neurologist did not make a timely diagnosis of stroke when asked.

Out of 465 patients included in their one-year review period, 103 strokes were missed – 22% of those at the academic institution and 26% of those at the community hospital.  And, again, take heart – 20 of the 55 patients missed at the academic institution were neurology consults for acute stroke, initially misdiagnosed by our neurology consultants as well.  Posterior strokes were twice as likely to be missed as anterior strokes, and symptoms such as dizziness, nausea, and vomiting were more frequent in missed presentations.  Focal weakness, neglect, gaze preference, and vision changes were less frequently missed.

Entertainingly, these authors are mostly verklempt over the fact that half of missed stroke diagnoses presented within time windows for tPA or endovascular intervention – although no accounting of potential eligibility is presented other than timeliness.

“Missed Ischemic Stroke Diagnosis in the Emergency Department by Emergency Medicine and Neurology Services”
http://www.ncbi.nlm.nih.gov/pubmed/26846858

When Procedural Sedation Goes Wrong

Like any typical Emergency Physician, procedural sedation is a frequent part of my practice.  Each procedure, of course, is preceded by discussion of informed consent – the balance of risks, benefits, and alternatives to the procedure.  The risks of many procedures in medicine are, luckily, quite rare – but just how rare are these risks in procedural sedation?

That’s why I love systematic reviews like this one – so I can be substantially more precise in such discussions.  Better yet, if you love forest plots, you’ll really love this article.

The high points:

  • Agitation associated with procedural sedation is almost entirely the domain of ketamine, at 16% pooled incidence.  “Ketofol” brings it down to 4.8%, and the remainder are ~1 in 1000.
  • Aspiration was observed once in a pooled multi-agent sample of 2,370 patients.
  • Bradycardia was witnessed essentially only in one study using etomidate – otherwise ~1 in 250.
  • Hypotension is most frequent with propofol, but still only 2%.  The rest fall around 1% or less.
  • Hypoxia was a little harder to pin down, with most agents’ results skewed by outlier studies.  5% is probably reasonable for propofol, while ketamine-containing protocols are probably ~1% or lower.
  • Laryngospasm was witnessed ~1 in 1,500.
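
These pooled incidences translate directly into the plain-language numbers useful at the bedside.  As a purely illustrative sketch – the rates below are rounded approximations of the figures above, and the helper itself is mine, not the authors’ – converting a proportion into a “1 in N” statement for a consent discussion might look like:

```python
# Illustrative only: rates are rounded approximations of the pooled figures
# quoted above; this helper is not from the paper.

def one_in_n(rate: float) -> str:
    """Express a proportion as a '1 in N' risk statement."""
    return f"1 in {round(1 / rate):,}"

# Approximate pooled incidences per the bullet points above
pooled_rates = {
    "agitation (ketamine)": 0.16,
    "agitation (ketofol)": 0.048,
    "hypotension (propofol)": 0.02,
    "hypoxia (propofol)": 0.05,
    "laryngospasm (any agent)": 1 / 1500,
}

for event, rate in pooled_rates.items():
    print(f"{event}: about {one_in_n(rate)}")
```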

It’s worth scanning through their detailed visualizations of the results to get a feel for how the different agents compare.  For what it’s worth, these data generally support my practice of using mostly propofol, ketamine, or combinations of the two.

“Incidence of Adverse Events in Adults Undergoing Procedural Sedation in the Emergency Department: A Systematic Review and Meta-analysis”
http://www.ncbi.nlm.nih.gov/pubmed/26801209

The Anecdotal Value of the Physical Exam

In the era of laboratory testing and imaging reliance, the physical examination is often neglected.  And, indeed, for many suspected diagnoses, the physical examination adds little – the positive or negative likelihood ratios associated with specific findings are not sufficient for ruling-in or ruling-out disease.

However, as this study describes, there is at least occasional value in performing a physical examination.  This is simply an e-mailed survey of five thousand clinicians asking for a vignette regarding a delay in diagnosis relating to a missed physical examination finding.  There were 208 responses to the survey meeting inclusion criteria, and, in general, the joy in this article is in Supplementary Table 1, which includes such gems as:

  • Missed pregnancy with twins before hysterectomy
  • Missed clavicle fracture, labeled “rule out myocardial infarction”
  • Missed previous appendectomy scar and made diagnosis of appendicitis again
  • Missed giant ovarian cyst, labeled as ascites
  • Missed gunshot entrance wound in emergency room

This general canvassing survey provides no information regarding the frequency of such misses, and some of the other 208 responses are not quite as straightforward.  The authors do subjectively note a pattern to some of the responses and suggest:

  • Acutely ill or painful patients should be fully exposed
  • Genital and rectal exams should not be omitted when relevant
  • Don’t forget shingles

They also note the physical examination is a “low-cost procedure”, which is, in part, true.  It is certainly less expensive than most laboratory or imaging procedures.  The scope of the exam dictates a time-cost of a limited physician resource, however, and even a couple extra minutes per patient could result in dramatic decreases in efficiency.  The authors here, while focusing on the “misses”, do not mention the possibility of false-positive findings potentially noticed on a less-focused examination, and the potential downstream resource costs associated with investigation of normal variants.

Future research could provide a better accounting of the true incidence of preventable diagnostic error associated with physical examination deficiencies – and the complex factors predicting the appropriate scope of examination in different settings.

“Inadequacies of Physical Examination as a Cause of Medical Errors and Adverse Events: A Collection of Vignettes”

Let’s Get Inappropriate With AHA Guidelines

How do you hide bad science?  With meta-analyses, systematic reviews, and, the granddaddy of them all, guidelines.  Guidelines have become so twisted over the recent history of medicine that the Institute of Medicine had to release a statement on how to properly create them, and a handful of folks have even gone so far as to imply guidelines have become so untrustworthy that a checklist is required for evaluation in order to protect patients.

Regardless, despite this new modern era, we have yet another guideline – this time from the American Heart Association – that deviates from our dignified ideals.  This guideline is meant to rate appropriate use of advanced imaging in all patients presenting to the Emergency Department with chest pain.  This includes, for their purposes, imaging to evaluate nSTEMI/ACS, suspected PE, suspected syndromes of the aorta, and “patients for whom a leading diagnosis is problematic or not possible”.

My irritation, as you might expect, comes at the expense of ACS and “leading diagnosis is problematic or not possible”.  The guidelines weighing the pros and cons of the various options for imaging PE and the aorta are inoffensive.  However, their evaluation of chest pain has one big winner: coronary CT angiograms.  The only time this test is not appropriate in a patient with potential ACS is when the patient has a STEMI.  They provide a wide range of broad clinical scenarios to assist the dutiful reader – all of which are CCTA territory – including every low/intermediate-risk nonischemic EKG and troponin-negative syndrome, explicitly even TIMI 0 patients.

Their justification of such includes citation of the big three – ACRIN-PA, ROMICAT II, and CT-STAT – showing the excellent negative predictive value of the test.  Indeed, the issues with the test – middling specificity inflicted upon low disease prevalence, increased downstream invasive angiography and revascularization of questionable value – are basically muttered under the breath of the authors.  Such dismissive treatment of the downsides of the test is of no surprise, considering Harold Litt, of ACRIN-PA and Siemens, is part of the writing panel for the guideline.  I will, again, point you to Rita Redberg’s excellent editorial in the New England Journal of Medicine, refuting the foundation of such wanton use of CCTA in the emergency evaluation of low-risk chest pain.

The “leading diagnosis is problematic or not possible” category is just baffling.  Are we really trying to enable clinicians to be so helpless as to say, “I don’t know!  Why think when I can scan?”  The so-called “triple rule-out” is endorsed in this document for this exact scenario – so you can use a test whose characteristics for detection of each entity under consideration are just as degraded as your clinical acumen.

Fantastically, both the Society of Academic Emergency Medicine and the American College of Emergency Physicians are somehow co-signatories to this document.  How can we possibly endorse such fragrant literature?

“2015 ACR/ACC/AHA/AATS/ACEP/ ASNC/NASCI/SAEM/SCCT/SCMR/ SCPC/SNMMI/STR/STS Appropriate Utilization of Cardiovascular Imaging in Emergency Department Patients With Chest Pain”
http://www.ncbi.nlm.nih.gov/pubmed/26809772

The Value-Add of Ultrasound to STONE Score

There are a few major questions to be addressed in patients with suspected renal colic:

  • Is there an infection?
  • If there is a stone, will it pass spontaneously or require urologic intervention?
  • If I make a clinical diagnosis without CT, will I miss an important alternative diagnosis mimicking stone?

The STONE score addresses the last question – using a weighted decision instrument to classify patients with suspected stone into low-, moderate-, and high-risk cohorts for ureteral stone disease.  There are some issues with face validity for STONE, and likewise the validation has shown its performance to be somewhat inexact.  However, it helps reinforce gestalt and aids in shared decision-making.

This study adds in point-of-care ultrasound to assess the degree of hydronephrosis.  The hope of these authors was the presence of hydronephrosis would improve the performance of the STONE score by identifying the few patients with stones at the low- and moderate- end, while also using moderate or greater hydronephrosis to predict the need for subsequent urologic intervention.

The answer: only marginally.

Generally, the most useful positive likelihood ratios are above 10, and the most useful negative likelihood ratios are below 0.1.  In this study, only one LR potentially met those criteria.  The presence of moderate or greater hydronephrosis in a patient with a low likelihood of stone disease had a +LR of ~20, both for the presence of stone and for stone disease requiring urologic intervention – but this +LR was based on only a handful of patients, and the 95% CI ranged from 4 to 110.
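
For context, the effect of a likelihood ratio on clinical probability follows the standard odds-likelihood (Bayes) update – convert probability to odds, multiply by the LR, convert back.  As a quick sketch, using the point estimate and CI bounds quoted above (the 10% pretest probability is my assumed figure for a “low likelihood” patient, not a number from the study):

```python
# Standard odds-likelihood (Bayes) update; the 10% pretest probability
# is an assumed illustration, not a figure from the study.

def post_test_probability(pretest: float, lr: float) -> float:
    """Convert a pretest probability and a likelihood ratio to a post-test probability."""
    pretest_odds = pretest / (1 - pretest)
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

# Point estimate (+LR ~20) and the 95% CI bounds (4 to 110) quoted above
for lr in (4, 20, 110):
    print(f"+LR {lr}: post-test probability {post_test_probability(0.10, lr):.0%}")
```

At the point estimate, a 10% pretest probability rises to roughly 69% – but the wide confidence interval means the true update could be anywhere from modest (~31% at +LR 4) to near-certain (~92% at +LR 110).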

Lastly, did the presence of hydronephrosis rule out any important alternative diagnoses?  No.  Out of 835 patients, there were 54 with an important alternative diagnosis.  Eleven patients had hydronephrosis plus an important alternative, including 3 cases of appendicitis, 1 of cholecystitis, and 2 of diverticulitis.  The presence of moderate or severe hydronephrosis was helpful, but would not obviate imaging for an alternative diagnosis if indicated.

“STONE PLUS: Evaluation of Emergency Department Patients With Suspected Renal Colic, Using a Clinical Prediction Tool Combined With Point-of-Care Limited Ultrasonography”
http://www.ncbi.nlm.nih.gov/pubmed/26747219