Bayesian Statistics: We’re Dumb as Rocks

A guest post by Justin Mazzillo, a community doc in New Hampshire.

Physicians are often required to interpret medical literature to make critical decisions on patient care. Given that this often happens in a hectic and hurried environment, a strong foundation in evidence-based medicine is paramount. Unfortunately, this study from JAMA showed that physicians at all levels of training have anything but that.

This group surveyed a mix of medical students, interns, residents, fellows, attending physicians, and one retired physician. They were asked to answer the following question:

“If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?”

Unfortunately, three-quarters of the subjects got the answer wrong. The results were consistent across all levels of training, and the most commonly given answer was almost as far from correct as possible.

I’ve withheld the answer for those who want to try the question themselves, and I know all the dedicated EMLoN readers will fare much better.

“Medicine’s Uncomfortable Relationship With Math: Calculating Positive Predictive Value”
http://archinte.jamanetwork.com/article.aspx?articleid=1861033

[I gave this same test to our residency program – and the results were almost identical.  A few sample “answers” below. -Ryan]


One thought on “Bayesian Statistics: We’re Dumb as Rocks”

  1. Thanks for posting this. Definitely thought provoking.

    One must be fair to those who were asked to perform the exercise. As the article states, they did not give any information about the sensitivity of the test. When developing the correct answer to THEIR question, they ASSUMED a sensitivity of 100%, but they did not tell those answering the question to make assumptions. If the ASSUMED sensitivity were cut in half, the answer would be roughly half of what they reported as "correct".

    While it would be reasonable to point to the group of responders who were "way off" and state that they demonstrated a lack of understanding of the statistics in question, it is also reasonable to consider that the subjects (and the residents in the example above) were forced by an authority figure/topic expert to solve a problem without all the necessary facts, causing them to falsely infer how the data should be used (i.e., they were "tricked" or "had their mind messed with").

    I'd like to see this exercise repeated with all the necessary information provided to the subjects, rather than EXPECTING the subjects to make assumptions about missing data (and not explicitly telling them to make assumptions).

    Thanks again for the thought provoking post!

    Chris Zammit, MD
    Asst Professor of EM and Neurology
    U Cincinnati
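
    [For readers who have already attempted the question (spoiler: this computes the answer), the commenter's point can be sketched with Bayes' rule. The prevalence (1/1000) and false positive rate (5%) come from the question itself; the sensitivity values are assumptions, since the question never states one — 100% is what the paper's authors assumed, and 50% is the commenter's hypothetical. -Ryan]

    ```python
    # Positive predictive value via Bayes' rule:
    # P(disease | positive) = true positives / (true positives + false positives)

    def ppv(prevalence, false_positive_rate, sensitivity):
        """Probability a person with a positive result actually has the disease."""
        true_pos = sensitivity * prevalence
        false_pos = false_positive_rate * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # With the paper's assumed 100% sensitivity: about 2%, not 95%.
    print(round(ppv(0.001, 0.05, 1.0), 4))   # 0.0196

    # With the commenter's hypothetical 50% sensitivity: roughly half that.
    print(round(ppv(0.001, 0.05, 0.5), 4))   # 0.0099
    ```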

Comments are closed.