Now It’s Fluids that Matter in Sepsis?

A few weeks ago, there was an article in the New England Journal of Medicine that dredged a retrospective data set to generate associations between the timeliness of different elements of a sepsis bundle and outcomes. In their analysis, delays in antibiotics, but not in fluid administration, were associated with increased mortality. This has, at least, face validity – although the accompanying associations for timely blood cultures and serum lactate measurements rather less so.

Now, conversely, we have another sepsis registry review attempting to tie time to fluid administration to mortality. This quality improvement registry prospectively identified patients with sepsis – and retrospectively abstracted their clinical data – between 2014 and 2016, resulting in a database of 11,182 patients. In their analysis, mortality for patients receiving their first crystalloid either within 30 minutes or between 30 and 120 minutes was ~18%, while mortality for patients whose fluids were initiated beyond 120 minutes was 24.5%.

Again, however, because these are comparisons performed on observational data, they remain subject to the slings and arrows of unmeasured confounders. Most patients whose fluid administration was started early had their care initiated in the Emergency Department – and, in clearly co-linear processes, had major elements of their care completed appropriately. This included repeat lactate measurements, antibiotics within 180 minutes of time zero, and not only IVF within 120 minutes, but, frankly, any IVF at all. Nearly 60% of patients in the >120-minute cohort received <5 mL/kg – or zero – IVF in their first six hours from measurement time zero.

This is, probably, another study simply cherry-picking one feature of an entire process predicated on timely identification and treatment of sepsis. These patients did not have a mortality advantage because of the timeliness of IVF alone – it ties in to all aspects of the care and attention given to properly identified sepsis patients. The effect size here is probably less a reflection of delays in IVF than of a comprehensive delay in diagnosis – and all its associated therapeutic misadventures.

“Patterns and Outcomes Associated With Timeliness of Initial Crystalloid Resuscitation in a Prospective Sepsis and Septic Shock Cohort”

Finding the Holes in CPOE

Our digital overlords are increasingly pervasive in medicine. In many respects, the advances of computerized provider order-entry are profoundly useful: some otherwise complex orders are facilitated, serious drug-interactions can be checked, along with a small cadre of other benefits. But, we’ve all encountered its limitations, as well.

This is a qualitative descriptive study of medication errors occurring despite the presence of CPOE. This prospective FDA-sponsored project identified 2,522 medication errors across six hospitals, 1,308 of which were related to CPOE. These errors fell into two main categories: CPOE failed to prevent the error (86.9%) and CPOE facilitated the error (13.1%).

CPOE-facilitated errors are the most obvious. For example, these include instances in which an order set was out-of-date, and a non-formulary medication order resulted in delayed care for a patient; interface issues resulting in mis-clicks or misreads; or instances in which CPOE content was simply erroneous.

More interesting, however, are the “failed to prevent the error” issues – which are things like dose-checking and interaction-checking failures. The issue here is not specifically the CPOE, but that we have become so dependent upon CPOE as a reliable safety mechanism that we’ve given up agency to the machine. We are bombarded by so many nonsensical alerts, we’ve begun to operate under an assumption that any order failing to anger our digital nannies must be accurate. These will undoubtedly prove to be the most challenging errors to stamp out, particularly as further cognitive processes are offloaded to automated systems.

“Computerized prescriber order entry–related patient safety reports: analysis of 2522 medication errors”

Imprecise Dosing of Liquid Medications

“Many parents are overdosing their kids, study says.” Is this true? Are parents poisoning their own children, as the headline implies?

Of course not; this is not in fact a study regarding overdose incidence at all. It is, quite simply, a measurement precision study.

This study involves 2,110 parents randomly assigned to measure doses of liquid medication in various quantities using either a dosing cup, a syringe with 0.2 mL increments, or a syringe with 0.5 mL increments. Approximately a quarter of parents were >20% off with their measurement, and 2.9% doubled the instructed dose. Taking these results as a surrogate for overdose depends on the therapeutic range for a medication – so, while the headline is not technically incorrect, the implication is an exaggeration.

With regard to measurement and dosing errors, there were a few important trends to note. Health literacy had a large influence on dosing errors – regardless of whether teaspoons or mL were used in the instructions. Then, the cup: avoid the cup when possible. Almost three-quarters of parents committed measurement or dosing errors when asked to provide a 2.5mL dose in the cup. Stick to the syringe and target round numbers (5mL) to minimize errors.

With regard to the premise of overdose – for medications with a wide therapeutic range, these data are not quite as clinically relevant. However, for high-risk medications, more time and effort should be taken to demonstrate proper dosing with parents.

“Liquid Medication Errors and Dosing Tools: A Randomized Controlled Experiment”

Missed a Stroke? You’re Not Alone

It’s easy to fall prey to the quality assurance shaming associated with your hospital’s stroke team.  It’s nearly impossible to find the right balance between over-triage of any remotely neurologic complaint, and getting the inevitable nastygram follow-ups resulting from unexpected downstream stroke diagnoses.

Take heart: it’s not just you.

This retrospective review evaluated patients discharged with a diagnosis of acute stroke at two hospitals – one an academic teaching institution, and one a non-teaching community hospital.  All patients discharged with such a diagnosis were reviewed manually by a neurologist, and charts were analyzed specifically to quantify the frequency with which an Emergency Physician did not initially document acute stroke as a possible diagnosis, or a consultant neurologist did not make a timely diagnosis of stroke when asked.

Out of 465 patients included in their one-year review period, 103 strokes were missed – 22% of those at the academic institution and 26% of those at the community hospital.  And, again, take heart – 20 of the 55 patients missed at the academic institution were neurology consults for acute stroke, initially misdiagnosed by our neurology consultants as well.  Posterior strokes were twice as likely to be missed as anterior strokes, and symptoms such as dizziness, nausea, and vomiting were more frequent in missed presentations.  Focal weakness, neglect, gaze preference, and vision changes were less frequently missed.

Entertainingly, these authors are mostly verklempt over the fact that half of missed stroke diagnoses presented within time windows for tPA or endovascular intervention – although no accounting of potential eligibility is presented other than timeliness.

“Missed Ischemic Stroke Diagnosis in the Emergency Department by Emergency Medicine and Neurology Services”

The Anecdotal Value of the Physical Exam

In the era of laboratory testing and imaging reliance, the physical examination is often neglected.  And, indeed, for many suspected diagnoses, the physical examination adds little – the positive or negative likelihood ratios associated with specific findings are not sufficient for ruling-in or ruling-out disease.

However, as this study describes, there is at least occasional value in performing a physical examination.  This is simply an e-mailed survey of five thousand clinicians, asking each for a vignette regarding a delay in diagnosis relating to a missed physical examination finding.  There were 208 responses meeting inclusion criteria, and, in general, the joy in this article is in Supplementary Table 1, which includes such gems as:

  • Missed pregnancy with twins before hysterectomy
  • Missed clavicle fracture, labeled “rule out myocardial infarction”
  • Missed previous appendectomy scar and made diagnosis of appendicitis again
  • Missed giant ovarian cyst, labeled as ascites
  • Missed gunshot entrance wound in emergency room

This general canvassing survey provides no information regarding the frequency of such misses, and some of the other 208 responses are not quite as straightforward.  The authors do subjectively note a pattern to some of the responses and suggest:

  • Acutely ill or painful patients should be fully exposed
  • Genital and rectal exams should not be omitted when relevant
  • Don’t forget shingles

They also note the physical examination is a “low-cost procedure”, which is, in part, true.  It is certainly less expensive than most laboratory or imaging procedures.  The scope of the exam dictates a time-cost of a limited physician resource, however, and even a couple extra minutes per patient could result in dramatic decreases in efficiency.  The authors here, while focusing on the “misses”, do not mention the possibility of false-positive findings potentially noticed on a less-focused examination, and the potential downstream resource costs associated with investigation of normal variants.

Future research could provide a better accounting of the true incidence of preventable diagnostic error associated with physical examination deficiencies – and the complex factors predicting the appropriate scope of examination in different settings.

“Inadequacies of Physical Examination as a Cause of Medical Errors and Adverse Events: A Collection of Vignettes”

More Futile “Quality”, vis-à-vis, Alert Fatigue

The electronic health record can be a wonderful tool.  As a single application for orders, results review, and integrated documentation storehouse, it holds massive potential.

Unfortunately, much of the currently realized potential is that of unintended harms and inefficiencies.

Even the most seemingly innocuous of checks – those meant to ensure safe medication ordering – have gone rogue, and no one seems capable of restraining them.  These authors report on the real-world effectiveness of adverse drug alerts related to opiates.  These were not public health-related educational interventions, but, simply, duplicate therapy, drug allergy, drug interaction, and pregnancy/lactation safety alerts.  These commonly used medications frequently generate medication safety alerts, and are reasonable targets for study in the Emergency Department.

In just a 4-month study period, these authors retrospectively identified 826 patients for whom an opiate-related medication safety alert was triggered, and these 4,742 alerts constituted the cohort for analysis.  Of these insightful, timely, and important contextual interruptions, 96.3% were overridden.  And, if only physicians had listened, these overridden alerts would have prevented: zero adverse drug events.

In fact, none of the 8 opiate-related adverse drug events could have been prevented by alerts – and most were merely itching, anyway.  The authors do attribute 38 potentially prevented adverse drug events to the 3.7% of accepted alerts – although, again, these would probably mostly just have been itching.
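Framed as alert yield, the arithmetic is stark – a quick sketch using only the figures above:

```python
# Yield of the opiate medication-safety alerts, per the numbers reported.
total_alerts = 4742
override_rate = 0.963                            # fraction of alerts overridden
accepted = total_alerts * (1 - override_rate)    # alerts actually heeded
potentially_prevented = 38                       # ADEs the authors attribute to accepted alerts

print(round(accepted))                               # ≈175 accepted alerts
print(round(total_alerts / potentially_prevented))   # ≈125 alerts fired per "prevented" event
```

Roughly one potentially prevented – and likely minor – event per 125 interruptions is the sort of ratio that breeds alert fatigue.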

Thousands of alerts.  A handful of serious events not preventable.  A few episodes of itching averted.  This is the “quality” universe we live in – one in which these alerts paradoxically make our patients less safe due to sheer volume and the phenomenon of “alert fatigue”.

“Clinically Inconsequential Alerts: The Characteristics of Opioid Drug Alerts and Their Utility in Preventing Adverse Drug Events in the Emergency Department”

Beaten Into Submission By Wrong-Patient Alerts

It’s a classic line: “Doctor, did you mean to order Geodon for room 12?  They’re here for urinary issues.”

And, the rolling of eyes, the harried return to the electronic health record – to cancel an order, re-order on the correct patient, and return to the business at hand.

Unfortunately, the human checks in the system don’t always catch these wrong-patient errors, leading to potentially serious harms.  As such, this handful of folks decided to test an intervention intended to reduce wrong-patient orders: a built-in system delay.  For every order, a confirmation screen is generated with contextual patient information.  The innovation in this case is that the alert cannot be dismissed until a 2.5-second timer completes.  The theory being, this extra, mandatory wait time gives the ordering clinician a chance to realize their error and cancel out.

Based on a before-and-after design, and observation of 3,457,342 electronic orders across 5 EDs, implementation of this confirmation screen reduced apparent wrong-patient orders from approximately 2 per 1,000 orders to 1.5 per 1,000.  With an average of 30 order-entry sessions per 12-hour shift in these EDs, this patient verification alert had a measured average impact of a mere 2.1 minutes of time.

Which doesn’t sound like much – until it accumulates across all EDs and patient encounters, and, in just the 4 month study period, this system occupied 562 hours of extra time.  This works out to 70 days of extra physician time in these five EDs.  As Robert Wears then beautifully estimates in his editorial, if this alert were implemented nationwide, it would result in 900,000 additional hours of physician time per year – just staring numbly at an alert to verify the correct patient.
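Wears’ nationwide figure is straightforward to sanity-check. A minimal sketch – where the ~10 orders-per-visit average is my assumption for illustration, not a number from the study – recovers the same order of magnitude:

```python
# Back-of-envelope check of the nationwide time cost of a mandatory
# 2.5-second confirmation delay on every electronic order.
DELAY_S = 2.5                   # confirmation timer per order, seconds
VISITS_PER_YEAR = 130_000_000   # annual U.S. ED visits, per the editorial title
ORDERS_PER_VISIT = 10           # assumed average (illustrative, not from the study)

total_hours = VISITS_PER_YEAR * ORDERS_PER_VISIT * DELAY_S / 3600
print(round(total_hours))       # ≈900,000 hours of physician time per year
```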

It is fairly clear this demonstration is a suboptimal solution to the problem.  While this alert certainly reduces wrong-patient orders by a measurable magnitude, the number of adverse events avoided is much, much smaller.  However, in the absence of an ideal solution, such alternatives as this tend to take root.  As you imagine and experience the various alerts creeping into the system from every angle, it seems inevitably clear:  we will ultimately spend our entire day just negotiating with the EHR, with zero time remaining for clinical care.

“Intercepting Wrong-Patient Orders in a Computerized Provider Order Entry System”

“‘Just a Few Seconds of Your Time.’ At Least 130 Million Times a Year”

Doctor Internet Will Misdiagnose You Now

Technology has insidiously infiltrated all manner of industry.  Many tasks, originally accomplished by humans, have been replaced by computers and robots.  All manner of industrialization is now automated, Deep Blue wins at chess, and Watson wins at Jeopardy!

But, don’t rely on Internet symptom checkers to replace your regular physician.

These authors evaluated 23 different online symptom checkers, ranging from the British National Health Service Symptom Checker to privately owned reference sites such as WebMD, with a variety of underlying methodologies.  The authors fed each symptom checker 45 different standardized patient vignettes, ranging in illness severity from pulmonary embolism to otitis media.  The study evaluated twin goals: are the diagnoses generated accurate?  And, do the tools triage patients to the correct venue for medical care?

For symptom checkers providing a diagnosis, the correct diagnosis was provided 34% of the time.  This seems pretty decent – until you go further into the data and note these tools left the correct diagnosis completely off the list another 42% of the time.  Most tools providing triage information performed well at referring emergent cases to high levels of care, with 80% sensitivity.  However, this performance was earned by simply referring the bulk of all cases for emergency evaluation, with 45% of non-emergent and 67% of self-care cases being referred to inappropriate levels of medical care.

Of course, this does not evaluate the performance of these online checkers versus telephone advice lines, or even against primary care physicians given the same limited information.  Before being too quick to tout these results as particularly damning, they should be evaluated in the context of their intended purpose.  Unfortunately, due to their general accessibility and typical over-triage, they are likely driving patients to seek higher levels of care than necessary.

“Evaluation of symptom checkers for self diagnosis and triage: audit study”

Expunging “Zero-Miss” from Chest Pain Evaluation

The admit rate for chest pain from the Emergency Department varies widely.  In some instances, the rule “chest pain = admit” is the norm – or, at the least, observation and provocative or anatomic imaging from the Emergency Department.  Indeed, studies extolling the advantages of CCTA in the ED included patients as young as 30 years – patients in whom the false positives from testing far outweigh the true positives.

The typical motivating factor for such aggressive admission rates has been a culture of “zero miss”, driven by huge settlements for missed MI.  Accordingly, this brief study surveyed Emergency Physicians and asked – what if there were no legal liability?  What if there were an acceptable miss rate of 1 or 2% in chest pain?  How many of these patients would be discharged instead of admitted?

Based on 259 surveys completed regarding a convenience sample of admitted chest pain patients, the answer from this single-center study is: 30%.

With over 5 million ED visits for chest pain annually, cutting the current 35% admission rate by 30% turns into a massive reduction in resource utilization.  And, frankly, it’s not as daunting to implement such thresholds as one might imagine: ED physicians set the standard of care, not lawyers.  As Jeff Kline has alluded, it’s time for domain experts to define reasonable practice variation and resource utilization, rather than leave it up to lawyers and their hired guns:

Some argue that “standard of care” is only determined by a jury. I disagree. Physician topic experts should write the standard of care.

— jeffrey kline (@klinelab) July 12, 2015

This definitely should be done.
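The scale of the potential savings follows directly from the figures above; a quick back-of-envelope sketch:

```python
# Rough scale of the proposed reduction, using the figures quoted above.
ANNUAL_CP_VISITS = 5_000_000   # ED visits for chest pain per year
ADMIT_RATE = 0.35              # current admission/observation rate
REDUCTION = 0.30               # share physicians would discharge absent liability

admissions = ANNUAL_CP_VISITS * ADMIT_RATE   # 1,750,000 admissions per year
avoided = admissions * REDUCTION             # admissions potentially avoided

print(f"{avoided:,.0f} admissions potentially avoided per year")  # 525,000
```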

“The Association Between Medicolegal and Professional Concerns and Chest Pain Admission Rates”

Unreliable Information About Drug Use? In the ED? Never!

This simple observational series illuminates the likely truths behind our anecdotal experience – patients are either clueless or deliberately misleading regarding their ingestion of foreign substances.

For the purposes of this investigation, “drug use” is not restricted to illicit substances – these authors also explored the reliability of reporting of prescription medications.  In this prospective, year-long enrollment, 55 patients were selected randomly from a larger cohort to have a urine sample submitted for liquid chromatography/mass spectrometry.  The LC/MS assay utilized was capable of detecting 142 prescription drugs, over-the-counter drugs, and drugs of abuse and their metabolites.  A reported drug not detected within a window of 2 half-lives was classified as discordant with patient self-report.

All told, 17 of 55 patients provided accurate medication histories, based on those detected on LC/MS.  Over half the patients under-reported – including one patient with 7 unreported drugs detected – while 29% over-reported a drug not subsequently detected.  Interestingly, illicit drugs were the least likely to be misreported, although that may simply be a reflection of the higher prevalence of prescription and OTC medications.

Such observations are limited by the accuracy of the assay utilized, which has not been validated.  However, it ought to come as no surprise that many patients either intentionally or unintentionally misrepresent their drug exposures.  While not all omissions are clinically relevant, non-compliance and misinformation may certainly have important implications for diagnosis and treatment.

“The Accuracy of Self-Reported Drug Ingestion Histories in Emergency Department Patients”