Unreliable Information About Drug Use? In the ED? Never!

This simple observational series illuminates the likely truths behind our anecdotal experience – patients are either clueless or deliberately misleading regarding their ingestion of foreign substances.

For the purposes of this investigation, “drug use” is not restricted to illicit substances – these authors also explored the reliability of reporting of prescription medications.  In this prospective, year-long enrollment, 55 patients were selected randomly from a larger cohort to have a urine sample submitted for liquid chromatography/mass spectrometry.  The LC/MS assay utilized was capable of detecting 142 prescriptions, over-the-counter drugs, and drugs of abuse and their metabolites.  A drug whose level of detection was lower than 2 half-lives was reported as discordant with patient self-reporting.
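
Whatever the authors’ precise operationalization, a two-half-life window is simply an exponential-decay calculation.  The sketch below is my own illustration of that arithmetic – not the study’s algorithm – using hypothetical drug parameters.

```python
def fraction_remaining(hours_since_last_dose: float, half_life_hours: float) -> float:
    """Expected fraction of a drug remaining, assuming first-order elimination."""
    return 0.5 ** (hours_since_last_dose / half_life_hours)

# Hypothetical example: two half-lives after the last reported dose,
# only ~25% of the drug remains - roughly the detection window implied
# by a two-half-life discordance criterion.
print(fraction_remaining(hours_since_last_dose=8.0, half_life_hours=4.0))  # 0.25
```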

All told, 17 of 55 patients provided medication histories fully concordant with the drugs detected on LC/MS.  Over half the patients under-reported – including one patient with 7 unreported drugs detected – while 29% reported a drug not subsequently detected.  Interestingly, illicit drugs were the least likely to be mis-reported, although that may simply reflect the higher prevalence of prescription and OTC medications.

Such observations are limited by the accuracy of the assay utilized, which has not been validated.  However, it ought to come as no surprise that many patients either intentionally or unintentionally misrepresent their possible drug exposures.  While not all omissions are clinically relevant, non-compliance and misinformation certainly may have important implications for diagnosis and treatment.

“The Accuracy of Self-Reported Drug Ingestion Histories in Emergency Department Patients”
http://www.ncbi.nlm.nih.gov/pubmed/25052325

Bayesian Statistics: We’re Dumb as Rocks

A guest post by Justin Mazzillo, a community doc in New Hampshire.

Physicians are often required to interpret the medical literature to make critical decisions about patient care. Given that this interpretation often occurs in a hectic and hurried environment, a strong foundation in evidence-based medicine is paramount. Unfortunately, this study from JAMA showed that physicians at all levels of training have anything but that.

This group surveyed a mix of medical students, interns, residents, fellows, attending physicians and one retired physician. They were asked to answer the following question:

“If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?”

Unfortunately, three-quarters of the subjects got the answer wrong. The results were consistent across all levels of training. The most commonly given answer was nearly as far from the correct value as possible.

I’ve withheld the answer for those who want to try the question themselves, and I know all the dedicated EMLoN readers will fare much better.
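
For readers who want to check their work after attempting the question, below is a minimal sketch of the underlying Bayes' theorem calculation – assuming, as the question implies, a perfectly sensitive test – using only the numbers given in the vignette. Run it yourself; the answer stays hidden until you do.

```python
def positive_predictive_value(prevalence: float, false_positive_rate: float,
                              sensitivity: float = 1.0) -> float:
    """P(disease | positive test), via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = false_positive_rate * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# The vignette's inputs: prevalence 1/1000, false positive rate 5%,
# sensitivity unstated (assumed perfect).
print(positive_predictive_value(prevalence=1 / 1000, false_positive_rate=0.05))
```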

“Medicine’s Uncomfortable Relationship With Math: Calculating Positive Predictive Value”
http://archinte.jamanetwork.com/article.aspx?articleid=1861033

[I gave this same test to our residency program – and the results were almost identical.  A few sample “answers” below. -Ryan]


Are a Third of Research Conclusions Wrong?

As I covered last year, half of what you’ve been taught in medicine is wrong – we just don’t know which half.

And, it turns out, sometimes even the same authors, taking a second look at the same data, can come up with new – and wildly different – conclusions.

This is a review of 37 randomized controlled trials published since 1966, each paired with a “re-analysis” of the same data.  These trials span the entire medical domain, from mycophenolate therapy after cardiac transplantation to homeopathy for fibrosis.  Of the 37 re-analyses, 32 involved authors from the original research group.  The re-analyses differed from the originals by changing statistical techniques, outcome definitions, or other methods of study interpretation.

Following re-analysis, 13 (35%) changed the original conclusions – suggesting that more, fewer, or even entirely different patients should be treated.  The implication for the reliability of the evidentiary basis of medical practice is obviously profound – if even the original authors, working from the original data, can arrive at conflicting conclusions.

In his editorial, Harlan Krumholz argues the solution is clear: open the data.  Independent verification of findings – whether by erasing bias or undesired mathematical pathology – is critical to ensuring the most complete understanding of the evidence base.  If our highest duty is to our patients, we must break down the barriers created by self-interest and institutional policies in order to promote data sharing – and serve patients by improving the clarity and transparency of medical practice.

“Reanalyses of Randomized Clinical Trial Data”
http://www.ncbi.nlm.nih.gov/pubmed/25203082

How Electronic Health Records Sabotage Care

Our new information overlords bring many benefits to patient care.  No, really, they do.  I’m sure you can come up with one or two aspects of patient safety improved by modern health information technology.  However, it’s been difficult to demonstrate benefits associated with electronic health records in terms of patient-oriented outcomes because, as we are all well aware, many EHRs inadvertently detract from efficient processes of care.

However, while we intuitively recognize the failings of EHRs, there is still work to be done in cataloguing these errors.  To that end, this study is a review of 100 consecutive closed patient safety investigations in the Veterans Health Administration relating to information technology.  The authors reviewed each case narrative in detail and classified the errors using a sociotechnical framework of EHR implementation and use.  Unsurprisingly, the most common EHR failures involved not providing the correct information in the correct context.  Following that, again unsurprisingly, were simple software malfunctions and misbehaviors.  A full accounting, with examples, is provided in the paper’s Table 2:

Yes, EHRs – the solution to, and cause of, all our problems.

“An analysis of electronic health record-related patient safety concerns”
http://jamia.bmj.com/content/early/2014/05/20/amiajnl-2013-002578.full

The Whole Truth, and Anything But

The publication of clinical trials in high-impact journals represents one of the most effective forms of knowledge translation for new medical evidence.  Three of these journals – JAMA, the New England Journal, and the Lancet – perennially rank among the highest-impact.  As I’ve mentioned before, these journals have a higher responsibility to society at large to maintain scientific integrity, as most readers accept the authors’ presented results and conclusions at face value.

However, clinical trials are also required to report their results on ClinicalTrials.gov.  These authors reviewed one year’s worth of clinical trials published in the three aforementioned journals and compared the high-impact publications with the results stashed away on ClinicalTrials.gov.  The 91 trials identified included 156 primary and co-primary endpoints, of which only 132 were described in both sources – and only 61% of those were concordant between sources.  Of 2,089 secondary endpoints, 619 were described in both sources – and only 55% of those were concordant.

Furthermore, the authors identified six studies in which the primary outcomes reported on ClinicalTrials.gov would lead to an alternative interpretation of the trial.  These included differences in time to disease resolution or progression, as well as results that achieved statistical significance in the publication, but not on ClinicalTrials.gov.

The authors conclude:

“…possible explanations include reporting and typographical errors as well as changes made during the course of the peer review process …. journal space limitations and intentional dissemination of more favorable end points and results in publications.”

We ought to expect better vetting of results by journal editors – particularly from sources frequently followed by the lay media.

“Reporting of Results in ClinicalTrials.gov and High-Impact Journals”
http://www.ncbi.nlm.nih.gov/pubmed/24618969

The “Standard of Care”

A guest post by William Paolo (@paolomd1), the Program Director of Emergency Medicine at SUNY Upstate.

“Standard of care” is a legal term whose colloquial medical usage, outside of tort law, has unfortunately been adopted by the medical infrastructure into its cultural lexicon.  The implication of its usage, when related amongst physicians, is the suggestion that there is an accepted, established, and parsimonious rendering of medical care that all reasonable providers would, under similar circumstances, judiciously employ.  It serves as an idealistic touchstone, resting upon the foundations of summated evidence, against which clinicians measure their individual and collective performances.  Actions that deviate from the collective wisdom are deemed inappropriate, negligent, and worthy of derision for failing to practice within the established evidentiary parameters of the authoritative collective guild.  Undermining this concept are the radical disparities in any agreed-upon standard among clinical specialists, and varying geographic norms, which disrupt the foundations of a standardized standard of care.  The very term is normative, proposing what ought to be rather than what currently is, based upon a leap of logic that has never been fully supported by medical empiricism as expressed within the evidentiary literature.  The standard may therefore be determined by the collective, but more often it is determined by a scant few individuals utilizing the argument from authority to prescribe practice patterns.  The difficulty lies in prospectively determining which current “standard of care” actually results in patient harm, as the story of medicine is replete with examples of injury obvious only in retrospect.

The PROWESS study, released in 2001, evaluated activated protein C – manufactured and distributed by Eli Lilly under the name Xigris – for the treatment of severe sepsis.  1690 people with severe sepsis were randomized to receive either activated protein C or placebo.  The primary end point was death from any cause 28 days after infusion.  The phase 3 trial was stopped early for efficacy, having demonstrated an absolute mortality reduction of 6% – a number needed to treat of 17.  As is now widely known, there were multiple issues with the original study, and the subsequent 2012 PROWESS-SHOCK study demonstrated no benefit – and potential harms – of Xigris.  In 2014, it is easy to appreciate the concerns about harm and the need for reproduction and verification of PROWESS to overcome equipoise; however, physicians in 2001 had a very well-done study (as most industry-supported research is – though it is also done well to bias results in the sponsor’s favor) that was stopped early due to patient benefit.  One could not fault a 2001 physician for referring to activated protein C as the new standard of care for sepsis – or could we?
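
For reference, the quoted number needed to treat is simply the reciprocal of the absolute risk reduction, using the rounded 6% figure cited above:

```latex
\text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.06} \approx 17
```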

The concept of a standard of care forces physicians to adopt an intellectually closed approach to evidence, presuming that science has settled particular questions regarding clinical conundrums.  Retrospectively, the foolishness of this position is obvious, as the inexorable progress of empiricism wrought through experimentation recurrently dismantles accepted evidentiary norms.  The “standard” of the current epoch’s standard of care has no more underlying claim to absolute truth-value than previous erroneous medical misadventures, exemplified by the various theories of humorism.  The problem, as it were, is one of perspective, as it is difficult to discern objective truths while temporally immersed in the perpetuation of often-faulty ideas and attitudes.  Only the march forward of time and accumulated wisdom is able to dismantle that which once seemed intuitively and evidentially obvious in a given medical period.  The reasonable intellectual position for a profession to adopt, therefore, is one of radical agnosticism towards absolute truth claims and delineations of care as defined by standards.  This is not to say that we should fall into nihilism, presume that all of our current care will one day be proven mistaken, and thereby be paralyzed by the knowledge of transformation.  The story of medical science, like all of science, is replete with advancements and misadventures, with a clear arrow of progression.  “Standard of care” adopts a position of unsupported truth-value without the reason necessary for its nuanced interpretation.  Though we may continue to utilize it as a profession, it would be preferable to hand it, in its entirety, back to the lawyers who endowed us with it in the first place.

“Efficacy and safety of recombinant human activated protein C for severe sepsis.”
http://www.ncbi.nlm.nih.gov/pubmed/11236773

Will Twitter Ruin Your Diagnostic Abilities?

Medical errors, by some estimates, are associated with cognitive biases up to 75% of the time.  Given the oft-quoted 98,000 deaths per year as a result of medical error, recognition of these biases seems prudent.  Knowing is, after all, half the battle.

One of these is “availability bias” – the tendency to judge the likelihood of a disease by how readily its details come to mind.  Essentially, if you don’t think of it, you’ll never diagnose it – but if you think of it too frequently, you might test or treat for it more often than appropriate.

These authors subjected 38 internal medicine residents to a simulation where they read Wikipedia entries on two diseases.  Six hours later, they were asked to review and submit diagnoses for eight cases – two of which superficially resembled the disease descriptions from Wikipedia.  Finally, the residents were asked to use a structured methodology evaluating signs and symptoms in order to systematically create and winnow a list of potential diagnoses.

I’ve probably already clued you in to the end result – but, basically, in the initial case review, residents had a 56% correct diagnosis rate for the “availability bias” cases and a 70% correct diagnosis rate for the others.  Then, by simply re-reading the cases in a systematic fashion, they were subsequently able to bring their rate of correct diagnosis on the bias cases up to 71%.

So, the next time you discover something novel and interesting on Twitter – try not to take it with you to work unchecked ….

“Exposure to Media Information About a Disease Can Cause Doctors to Misdiagnose Similar-Looking Clinical Cases”
http://www.ncbi.nlm.nih.gov/pubmed/24362387

Irreproducible Research & p-Values

I circulated this write-up from Nature last week on Twitter, but was again reminded by Axel Ellrodt of its importance when initiating research – particularly in the context of studies endlessly repeated, or slightly altered, until positive (read: Big Pharma).

The gist of this write-up – other than the fact the p-value was never really intended to be a test of scientific validity and significance – is the important notion that a universal cut-off of 0.05 is inappropriate.  Essentially, if you’re familiar with Bayes’ theorem and the foundation of evidence-based medicine, you understand that, if a disease is highly unlikely, a rule-in test ought to be tremendously accurate.  Likewise, if a scientific hypothesis is unlikely, or an estimated effect size for a treatment is small, the strength of evidence required to support a positive result needs to be far greater than the one-in-twenty approximation of the traditional 0.05 cut-off.  The p-value, then, functions akin to a likelihood ratio – providing not a true dichotomous positive/negative outcome, but simply further adjusting the chance a result can be reliably replicated.
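
To make that analogy concrete, here is a minimal sketch – my own illustration, not a calculation from the Nature piece – of the standard post-study probability computation: the chance a “statistically significant” finding reflects a true effect depends on the prior plausibility of the hypothesis and the study’s power (assumed here to be 80%), not on the 0.05 threshold alone.

```python
def prob_true_given_significant(prior: float, alpha: float = 0.05,
                                power: float = 0.80) -> float:
    """P(hypothesis is true | p < alpha), via Bayes' theorem."""
    true_positives = power * prior
    false_positives = alpha * (1.0 - prior)
    return true_positives / (true_positives + false_positives)

# A long-shot hypothesis (prior 10%) "confirmed" at p < 0.05 is still wrong
# roughly a third of the time; a plausible one (prior 50%) is rarely wrong.
print(prob_true_given_significant(prior=0.10))  # ~0.64
print(prob_true_given_significant(prior=0.50))  # ~0.94
```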

This ties back into the original purpose of the p-value in research – to identify topics worth further investigation, not to conclusively confirm true effects.  It also leads, very obviously, into the recurrent phenomenon of irreproducible research throughout clinical and basic science.  And, indeed, Nature’s entire segment on the challenges of reproducibility in research is an excellent read for any developing investigator.

“Scientific method: Statistical errors”
http://www.nature.com/news/scientific-method-statistical-errors-1.14700

“Challenges in Irreproducible Research”
http://www.nature.com/nature/focus/reproducibility/index.html

I’ve Got the [Wrong] Answer!

Most of us think we have fair insight into our own medical decision-making.  When presented with a difficult case, I think most would expect to offer a provisional diagnosis with a correspondingly decreased level of confidence.

Apparently, nope.

This fascinating insight into decision-making comes from a set of clinical case vignettes distributed to physician volunteers.  118 physicians were recruited via e-mail to complete four structured case presentations – two “easy”, two “difficult”.  Physicians were not specifically notified regarding the variable difficulty of the cases involved.  They were provided first the history, then the exam, followed by results of general and specific testing, if requested.  During each stage of the process, physicians were asked to provide preliminary diagnoses and their level of confidence.

For the two easy cases, the mean confidence level of respondents was a little over 70%.  And, final diagnostic accuracy was a little under 50%.  For the difficult cases, the mean confidence level of respondents was about 65%.  And diagnostic accuracy was … 5%.  Almost as confident, almost never right.

Physician characteristics provided few insights regarding behavior, confidence, and accuracy.  Increasing years of experience were related to decreased testing and consultation requests – but, for the most part, the only insight:  physicians are lacking in insight.

“Physicians’ Diagnostic Accuracy, Confidence, and Resource Requests”
http://www.ncbi.nlm.nih.gov/pubmed/23979070

Stroke or Stroke Mimic? Who Cares!

Suppose you’re “lucky” enough to be taken to an experienced stroke center if you have stroke-like symptoms.  After all, they see strokes every day, are experts in the diagnosis of stroke, and have given thousands of patients thrombolytics.  However, how often might they be wrong, you ask?

Oh, they estimate about 1 in every 50.  But, truthfully, it’s probably much worse.

This is a multi-center observational cohort that purports to identify the percentage of patients treated with tPA and subsequently diagnosed as stroke mimics.  Out of the 5518 patients in their cohort, 100 were identified as stroke mimics.  Two of the 100 had sICH by NINDS criteria, but none died.  Therefore, these authors confirm, tPA is safe even when they’re wrong, and the collateral damage of racing to tPA is low.

Of course, their methodology for identifying a stroke mimic is hugely skewed towards maintaining the diagnosis of ischemic stroke.  Only patients in whom the clinical details did not suggest a vascular etiology, or in whom a clear alternative diagnosis was present, were labeled mimics.  Patients with nonspecific features, non-contradictory imaging, or no definite evidence favoring a stroke mimic remained classified as acute stroke.

So, even at experienced stroke research institutions – 1 in 50 with the most generous of criteria.  What’s the chance real-world performance approaches anything close to this level of diagnostic skill?

The authors, of course, declare multiple financial conflicts of interest with the manufacturer of tPA.

“Safety of Thrombolysis in Stroke Mimics: Results From a Multicenter Cohort Study”
http://www.ncbi.nlm.nih.gov/pubmed/23444310