Death From a Thousand Clicks

The modern physician – one of the most highly skilled, highly compensated data-entry technicians in history.

This is a prospective, observational evaluation of physician activity in the Emergency Department, focusing mostly on the time spent interacting with the electronic health record.  Specifically, the authors counted mouse clicks during various documentation, order-entry, and other patient care activities.  Observations were conducted in 60-minute blocks and then extrapolated out to an entire shift, based on multiple observations.

The observations were taken from a mix of residents, attendings, and physician extenders, and offer a lovely glimpse into the burdensome overhead of modern medicine: 28% of time was spent in patient contact, while 44% was spent performing data-entry tasks.  It requires 6 clicks to order an aspirin, 47 clicks to document a physical examination of back pain, and 187 clicks to complete an entire patient encounter for an admitted patient with chest pain.  This extrapolates out, at a pace of 2.5 patients per hour, to ~4000 clicks for a 10-hour shift.
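For the curious, the back-of-envelope extrapolation is easy to reproduce – a minimal sketch using only the figures above (the ~160 clicks-per-encounter average is inferred from the reported totals, not quoted directly in the paper):

```python
# Reproduce the shift-level click extrapolation from the reported figures.
PATIENTS_PER_HOUR = 2.5
SHIFT_HOURS = 10
CLICKS_PER_SHIFT = 4000  # the paper's extrapolated total

patients_per_shift = PATIENTS_PER_HOUR * SHIFT_HOURS              # 25 patients
avg_clicks_per_encounter = CLICKS_PER_SHIFT / patients_per_shift  # ~160 clicks

print(f"{patients_per_shift:.0f} patients per shift, "
      f"~{avg_clicks_per_encounter:.0f} clicks per encounter on average")
```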

The authors propose that a more efficient documentation system would result in increased time available for patient care, increased patients per hour, and increased RVUs per hour.  While the productivity gains they generate from this sensitivity analysis are essentially fantastical, the underlying concept is valid: the value proposition for these expensive, inefficient electronic health records is based on maximizing reimbursement and charge capture, not on empowering providers to become more productive.

The EHR in this study is McKesson Horizon – but I’m sure these results are generalizable to most EHRs in use today.

“4000 Clicks: a productivity analysis of electronic medical records in a community hospital ED”
http://www.ncbi.nlm.nih.gov/pubmed/24060331

Is Cephalexin Monotherapy Sufficient?

Following up on last week’s publication regarding the efficacy of TMP-SMX monotherapy for skin and soft tissue infections – with specific concern for S. pyogenes resistance – this article takes the opposing approach: cephalexin monotherapy.  Cephalexin and other first-generation cephalosporins were used effectively for their gram-positive coverage in SSTs for quite some time – right up until falling flat in the MRSA era.  They have excellent utility against Group A Strep, but lack any activity against MRSA.

This is a prospective, comparative-effectiveness trial of cephalexin monotherapy vs. cephalexin + TMP-SMX in the treatment of uncomplicated, non-purulent cellulitis.  They enrolled 153 patients, lost 7 to follow-up, and the cure rates were 85% in the dual-therapy group and 82% in the monotherapy group.  Baseline differences between groups were generally small and likely clinically insignificant.  Oddly, almost a quarter of both groups received IV antibiotics at the initial visit.  Regardless, cephalexin monotherapy was non-inferior to cephalexin + TMP-SMX dual-therapy in this small trial.
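To put “non-inferior … in this small trial” in perspective, here is a rough sketch of the precision on offer – assuming roughly equal arm sizes after losses and using a crude Wald interval, neither of which is necessarily the trial’s actual analysis:

```python
# Illustrative precision of the cure-rate comparison, NOT the trial's own analysis.
from math import sqrt

p_mono, p_dual = 0.82, 0.85   # reported cure rates
n_mono = n_dual = 73          # assumed: ~146 analyzed patients split evenly

diff = p_mono - p_dual        # monotherapy minus dual therapy
se = sqrt(p_mono * (1 - p_mono) / n_mono + p_dual * (1 - p_dual) / n_dual)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

# Whether an interval this wide clears the trial's pre-specified
# non-inferiority margin is the entire question with a sample this size.
print(f"cure-rate difference {diff:+.0%}, 95% CI {lo:+.0%} to {hi:+.0%}")
```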

Of course, as usual, this study excludes all patients with diabetes, immunosuppression, or peripheral vascular disease – which is to say, everyone we realistically see in the Emergency Department.  However, for non-purulent cellulitis in the absence of risk factors for MRSA, it is likely reasonable to continue with first-line cephalexin monotherapy.  It should also be noted these authors used full weight-based dosing schedules for their patients, with adults >80kg receiving 1000mg of cephalexin and TMP-SMX 160/800 each four times daily.

“Clinical Trial: Comparative Effectiveness of Cephalexin Plus Trimethoprim-Sulfamethoxazole Versus Cephalexin Alone for Treatment of Uncomplicated Cellulitis: A Randomized Controlled Trial”
http://www.ncbi.nlm.nih.gov/pubmed/23457080

Bar-Code Scanners in the ED

Welcome to the Emergency Department of the Future.  Soft chimes play in the background.  Screaming children are appropriately muffled.  There is natural light and you can hear the ocean.  Patients and doctors alike are polite and respectful, and a benign happiness seems to radiate from all directions.  A young nurse wafts through the patient care areas with a handheld barcode scanner, verifying and dispensing medications in a timely and accurate fashion.

Everything about that vision is coming to your Emergency Department – everything, that is, except the chimes, the quiet, the politeness, and the happiness.  The bar-code scanners, however, just might be.

This is a pre- and post-implementation study from The Ohio State University regarding their use of handheld scanners for bar-code medication administration (BCMA).  Our hospital system uses these throughout the inpatient services to verify and provide decision-support for nurses at the final step of the medication delivery process.  However, given the chaotic nature of the Emergency Department, we have not yet implemented them in that environment.  Ohio State, on the other hand, has forged ahead – requiring that all medication administrations be verified by bar-code scanner, excepting a small number of “emergency” medications that may be given via override.  They also excluded patients in their resuscitation areas from this requirement.

Across the nearly 2,000 medication administrations observed in the pre- and post-implementation periods, there were reductions in essentially all types of drug administration errors: 63/996 errors pre-implementation and 12/982 post-implementation.  Therefore, these authors conclude – hurrah!
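For scale, the reported counts work out as follows – simple arithmetic on the numbers quoted above:

```python
# Error rates before and after bar-code implementation, from the reported counts.
pre_errors, pre_n = 63, 996
post_errors, post_n = 12, 982

pre_rate = pre_errors / pre_n     # ~6.3%
post_rate = post_errors / post_n  # ~1.2%

print(f"pre {pre_rate:.1%}, post {post_rate:.1%}, "
      f"absolute reduction {pre_rate - post_rate:.1%}, "
      f"relative reduction {(pre_rate - post_rate) / pre_rate:.0%}")
```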

However, none of these errors were serious – and only one even met criteria for “possible temporary harm”.  The majority of errors were “wrong dose”, and most often involved sedatives, narcotics, and nausea medications.  Certainly, this system may reduce the potential for a significant adverse drug event, but it would require much greater statistical power to detect such an effect.  These authors do not touch much upon any unintended consequences of their implementation – such as delays in treatment, changes in LOS, or qualitative frustration with the system.  A better accounting for these effects would assist in fully assessing the utility of this intervention in the Emergency Department.

“Effect of Barcode-assisted Medication Administration on Emergency Department Medication Errors”
http://www.ncbi.nlm.nih.gov/pubmed/24033623

Reforming Clinical Guidelines

If you’ve been following my various linkaways on this blog over the last couple of months, you’ve seen me highlight an investigative report by Jeanne Lenzer regarding some of the controversial recent clinical guidelines.  Beyond that, however, is Part 2 of this project – in which a team headlined by Jerome Hoffman, Curt Furberg, and John Ioannidis came up with a set of evaluation criteria for worrisome conflicts-of-interest in clinical guidelines.

These evaluation criteria, a set of “red flags”, were chosen over months of debate:

  • Sponsor(s) is a professional society that receives substantial industry funding
  • Sponsor is a proprietary company, or is undeclared or hidden
  • Committee chair(s) have any financial conflict
  • Multiple panel members have any financial conflict
  • Any suggestion of committee stacking that would pre-ordain a recommendation regarding a controversial topic
  • No or limited involvement of an expert in methodology in the evaluation of evidence
  • No external review
  • No inclusion of non-physician experts/patient representative/community stakeholders

As you can see, the list includes several types of sponsorship COI, as well as other cautions meant to ensure objectivity and patient-centric recommendations.  Whether this set of “red flags” becomes a useful tool for future guideline evaluation remains to be seen.  As should surprise no one, the new ACEP/AAN tPA clinical policy – evaluated independently of the guidelines Working Group – garners an unimpressive six “red flags” and a “caution”.

William Mallon, David Newman, Kevin Klauer, and many others – myself included – contributed to this project.

“Ensuring the integrity of clinical practice guidelines: a tool for protecting patients”
http://www.bmj.com/content/347/bmj.f5535

Is TMP-SMX Monotherapy Sufficient?

Caution has traditionally been advised for the use of trimethoprim-sulfamethoxazole for skin and soft-tissue infections.  A significant portion of these infections are caused by Group A Strep – an organism traditionally thought to be resistant to TMP-SMX.

However, these authors feel that, with the rising prevalence of MRSA SSTs – for which TMP-SMX provides a useful oral spectrum of activity – it is time to re-examine this dogma.  Specifically, they feel the current opinion is based on inappropriate laboratory culture technique that allows organisms to overcome the antibacterial target of TMP-SMX: thymidine metabolism.  As part of ongoing clinical trials of TMP-SMX monotherapy for SSTs, they collected S. pyogenes isolates from patients and performed susceptibility testing on a variety of media they feel more appropriately reflect in vivo performance.  Of the 200 isolates tested on this variety of media, only one was evaluated by Etest to be resistant to TMP-SMX.  Therefore, these authors conclude TMP-SMX monotherapy may be entirely reasonable.

So, who is right?  These authors – with their new culture media – or the pre-existing dogma?  Unfortunately, the true answer is not yet clear – we need to look at clinical outcomes, not in vitro activity.  One working theory, at least, is that infected hosts supply enough thymidine to enable bacteria to negate the mechanism of action of TMP-SMX in vivo.  Only prospective clinical evaluation will provide better direction.  I would not yet suggest TMP-SMX monotherapy for SSTs where Strep is still a possibility.

“Is Streptococcus pyogenes Resistant or Susceptible to Trimethoprim-Sulfamethoxazole?”
http://www.ncbi.nlm.nih.gov/pubmed/23052313

CTCA Is Better Than Wishing & Hoping

CT coronary angiograms have infiltrated the Emergency Department, with trials such as ACRIN-PA, CT-STAT, and ROMICAT II demonstrating their utility – primarily sensitivity – for the rapid detection of coronary artery disease.  It comes as a surprise to no one that an angiogram demonstrating a total absence of coronary artery disease confers an excellent short-term prognosis.  The downsides, of course, are cost, contrast, radiation, and the suggestion that low-risk patients might be harmed by additional, unnecessary testing.  However, enthusiasm for the procedure abounds.

This is an observational study from Thomas Jefferson University looking at consecutive Emergency Department patients referred for CTCA, prospectively collecting variables to calculate TIMI and GRACE risk scores.  No clearly defined primary outcome is provided, but it seems these authors aimed to demonstrate that CTCA would correctly detect severe coronary disease (>70% stenosis) and better prognosticate adverse outcomes than the TIMI and GRACE risk scores.  They enrolled 250 patients, lost 29 to follow-up, and reported six adverse cardiovascular events within 30 days – 2 MIs, 2 ACS, and 2 revascularizations.  All six were TIMI 1 or 2, had relatively middling GRACE scores, and had extensive CAD detected on CTCA.  Overall, 17 patients had significant CAD (>50% stenosis) detected, and increasing TIMI and GRACE scores did not correlate with its presence or absence.  Therefore, these authors feel CTCA is an appropriate diagnostic study and is superior to clinical assessment and risk scores.  They even go so far as to disparage Rita Redberg’s editorial in the New England Journal of Medicine that questioned whether any cardiovascular imaging was indicated before low- and intermediate-risk chest pain patients left the Emergency Department.

They seem, unfortunately, to turn a blind eye to their inability to appropriately select patients for imaging, with only a 6.8% yield for significant stenosis, fewer than half of whom even progressed to a cardiac outcome.  I also take issue with the 2 patients they classified as ACS – they didn’t receive revascularization and didn’t have an MI – so what were they?  Either way, we’re looking at great expense in a cohort with only a 1.4–2.4% incidence of positive cardiac outcome within 30 days.  Additionally, the comparison to TIMI and GRACE is a straw man – both instruments have been shown to have poor predictive value in Emergency Department populations.
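The headline yield figures fall straight out of the reported counts – a quick sketch; the lower 1.4% end of the quoted range depends on which events and which denominator one accepts, so only the clean numbers are reproduced here:

```python
# Yield and 30-day event rate from the reported counts.
enrolled = 250
significant_cad = 17   # >50% stenosis on CTCA
events_30d = 6         # 2 MI, 2 "ACS", 2 revascularizations

print(f"yield of significant CAD: {significant_cad / enrolled:.1%}")     # 6.8%
print(f"30-day event rate (all 6 events): {events_30d / enrolled:.1%}")  # 2.4%
```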

This is simply another trip down the quixotic zero-miss path to destruction, even going so far as to fearmonger with liability claim cost statistics.  In truth, it’s clear we’re simply doing a terrible job searching for the needle in the haystack – and the vast majority of these patients are safe for appropriate follow-up after initial Emergency Department assessment.  Rather than use this article to justify admission for CTCA, I would present these data to your patients in the context of shared decision-making and educate them regarding the high costs, abysmal yield, and poor specificity of the test used in this context.

“Cardiac risk factors and risk scores vs cardiac computed tomography angiography: a prospective cohort study for triage of ED patients with acute chest pain”
http://www.ncbi.nlm.nih.gov/pubmed/24035047

Choose Cardiac Testing Wisely

This ought to come as no surprise to proponents of Bayesian reasoning – test individuals with inappropriate pretest probabilities, and the outputs will be garbage.

This is a prospective evaluation of consecutive patients referred for office-based stress myocardial perfusion imaging with single-photon emission computed tomography (SPECT-MPI).  These authors evaluated referring clinician notes and patient histories, stratified 10-year coronary risk and chest pain syndromes, and categorized studies as appropriate, uncertain, or inappropriate.

Of the 1,707 patients referred for SPECT-MPI, 1,511 had complete follow-up and were classifiable.  Pre-test appropriateness of referrals varied – 47% of primary care referrals and 28% of cardiologist referrals were inappropriate, with inappropriate referrals more common for women than for men.  The kicker – abnormal MPI in the appropriate-testing group was associated with poorer outcomes, while abnormal tests in the inappropriate-testing group had no such association.

One of the basic principles of diagnostic testing is choosing patients for whom we can have faith in the results.  In our world of flawed tests, when the likelihood of a positive result is low, the balance between true positives and false positives tilts to favor misleading rather than valid diagnoses.  In a minority of instances, these tests may yet be appropriate – but, as we can see, cardiac testing under questionable circumstances provides no true prognostic value.  These are the population costs and harms of our zero-miss culture.
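To put rough numbers to that principle, here is a minimal Bayesian sketch – the sensitivity and specificity figures are round illustrative assumptions, not characteristics of SPECT-MPI from this study:

```python
# Positive predictive value as a function of pretest probability (Bayes' theorem).
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

SENS, SPEC = 0.85, 0.85   # assumed, illustrative test characteristics

for pretest in (0.02, 0.10, 0.30):
    print(f"pretest {pretest:.0%} -> PPV {ppv(pretest, SENS, SPEC):.0%}")
# At a 2% pretest probability, most "positive" results are false positives.
```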

“The Impact of Appropriate Use on the Prognostic Value of SPECT Myocardial Perfusion Imaging”
http://www.ncbi.nlm.nih.gov/pubmed/24021779

Conflict-of-Interest Catastrophe

As if we didn’t have enough difficulty interpreting results when study design is inadequate, blinding is violated, studies are underpowered, or there are differences in enrollment – we must also worry about gross scientific misconduct.

This concerns the Jikei Heart Study and the Kyoto Heart Study – evaluations of valsartan conducted and published in 2007 and 2009, respectively.  In short, a series of investigations into the integrity of the studies revealed the following, as quoted:

“We believe, therefore, that the data were intentionally altered.”

and

“We suspect that the data were altered during their statistical analysis.”

The Lancet, in its retraction notice, notes specific challenges in following up and identifying the affiliation of the study statistician, who appears not to have disclosed his employment by Novartis.

When the stakes are high, the temptation to use every potential advantage to generate favorable study results is simply too great.  Jeffrey Drazen has asked us to “Believe the Data” – I think the onus is on those who generate the data to earn our trust.

“Retraction—Valsartan in a Japanese population with hypertension and other cardiovascular disease (Jikei Heart Study): a randomised, open-label, blinded endpoint morbidity-mortality study”
http://www.ncbi.nlm.nih.gov/pubmed/24012258

Local Variation in CT Use

Tell me if this sounds like your department – with some folks, every time you receive sign-out, the radiologist has billed for a new BMW.  With others, there’s an uncomfortable underuse of CT, even in those ticking time bombs, the elderly.  We all have an anecdotal feel for the variation in practice, and these authors have gone ahead and quantified it.

This is a descriptive, observational study of a single center in Virginia evaluating individual physician CT scan use.  Overall, the department performed CT scans on 23.8% of the nearly 200,000 patient visits during the study period.  Across the 49 Emergency Physicians tracked – yes, the variation is exactly as one would expect.  The most frequent utilizers ordered CTs on nearly one third of patients, while the least frequent ordered them on only 1 in 8.

There are a couple of individually coded information graphics embedded in the paper that demonstrate some extremely striking variation.  For example, as coded by chief complaint – a few physicians ordered CT scans on >60% of headache patients, while an even greater number ordered zero.  60% vs. zero!  While some practice variation is obviously acceptable given evolving practice and a heterogeneous patient substrate, this clearly reflects some element of underlying low-quality care.

As poorly implemented as they may be, the concept behind prior quality measures for CT use certainly has merit.  Quality measures aimed at increasing testing yield – ideally coupled with liability protections for Emergency Physicians – are likely to have an increasing role moving forward.

“Variation in use of all types of computed tomography by emergency physicians”
http://www.ncbi.nlm.nih.gov/pubmed/23998807

Tranexamic Acid for Epistaxis

Well, it’s not the major hemorrhage of CRASH-2 – but, as every Emergency Physician knows, refractory epistaxis is burdensome and significantly irritating to all involved.  Luckily, there are a variety of methods available to manage the bleeding, most of them successful.

You may now add tranexamic acid to this list.  TXA, an antifibrinolytic agent already used to reduce hemorrhage-associated coagulopathy, has also been applied in many different forms to minor bleeding.  These investigators from Iran randomized patients presenting with severe epistaxis – in unblinded fashion, owing to differences in odor – to “conventional control” with cotton pledgets soaked in lidocaine + epinephrine versus pledgets soaked with 500mg of TXA.  Sadly, they do not declare a primary outcome – rather, the authors list several “efficacy variables” – but, whichever they would have chosen, it would have favored the TXA group.  71% of TXA patients had cessation of bleeding within 10 minutes, versus 31% with lidocaine + epinephrine; the TXA group also had faster discharge from the ED, less rebleeding within 24 hours, and less rebleeding at one week.
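Derived from the reported 10-minute bleeding-cessation rates alone (not an endpoint the authors themselves frame this way), the arithmetic is striking:

```python
# Absolute difference and number-needed-to-treat for early cessation of bleeding.
txa, control = 0.71, 0.31   # reported 10-minute cessation rates

arr = txa - control         # absolute difference: ~40 percentage points
nnt = 1 / arr               # ~2.5 patients treated per additional early cessation

print(f"absolute difference {arr:.0%}, NNT ≈ {nnt:.1f}")
```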

It seems physiologically plausible, in any event, considering lidocaine + epinephrine isn’t directly therapeutic for hemostasis.  As any Emergency Physician knows, it’s all about Plans B, C & D for every situation – and TXA seems another reasonable tool for the box.

“A new and rapid method for epistaxis treatment using injectable form of tranexamic acid topically: a randomized controlled trial”
http://www.ncbi.nlm.nih.gov/pubmed/23911102