It is not surprising, but it is worrisome, that doctors have trouble with statistics, particularly conditional probabilities. Twenty-five German doctors were asked about the following situation. It is admittedly a tricky question, but it is surely the sort of question doctors are exposed to constantly:

The probability that one of these women has breast cancer is 0.8 percent. If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7 percent that she will still have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?

The results of this small trial were not encouraging:

[The] estimates whipsawed from 1 percent to 90 percent. Eight of them thought the chances were 10 percent or less, 8 more said 90 percent, and the remaining 8 guessed somewhere between 50 and 80 percent. Imagine how upsetting it would be as a patient to hear such divergent opinions.

As for the American doctors, 95 out of 100 estimated the woman’s probability of having breast cancer to be somewhere around 75 percent.

The right answer is 9 percent.

You would think that this sort of quantitative analysis would play an important role in the medical profession. I am certain that a great many people around the world have received inappropriate treatment or taken unnecessary risks because doctors failed to properly apply Bayes’ Theorem. Indeed, false positives in medical tests are among the most commonly cited examples of how medical statistics can mislead. The same reasoning bedevils biometric security protocols, underpins spam email filtering, and, when neglected, is a common source of general statistical errors.

The proper remedy for this is probably to provide doctors with simple-to-use tools that allow them to go from data of the kind in the original question to a correct analysis of probabilities. The first linked article also provides a good example of a more intuitive way to think about conditional probabilities.


If the doctors weren’t given a few minutes, a pencil, notepad, and preferably a calculator then I’m not surprised they got it wrong. Of course it may be that after all those years of scrawling prescriptions the doctors are unable to read their own handwriting, in which case the pencil & notepad wouldn’t be much help.

We’re looking for the probability of cancer, given positive mammogram:

probability you have cancer, given that you have a positive mammogram = the probability that you will have a positive mammogram if you have cancer, times the probability you have cancer, divided by your overall probability of having a positive mammogram

P(cancer|positive_mammogram) = P(positive_mammogram|cancer) * P(cancer) / P(positive_mammogram)

First, we need to calculate the overall probability of a positive mammogram. For women without cancer, that is 7%. For women with cancer, it is 90%. 0.8% of women have cancer. Therefore, the overall probability of a positive mammogram is [ (0.8)(90) + (99.2)(7) ] / 100, or 7.66%.

Now we can calculate the conditional probability:

= (90) * (0.8) / (7.66)

= 9.4%
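The arithmetic above can be sketched in a few lines of Python, with the numbers taken straight from the quoted question:

```python
# Bayes' theorem for the mammogram question quoted in the post.
# P(cancer) = 0.8%, P(positive | cancer) = 90%,
# P(positive | no cancer) = 7%.

p_cancer = 0.008
p_pos_given_cancer = 0.90
p_pos_given_no_cancer = 0.07

# Law of total probability: overall chance of a positive mammogram.
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_no_cancer * (1 - p_cancer))

# Bayes' theorem: chance of cancer given a positive mammogram.
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"P(positive) = {p_pos:.4f}")                      # 0.0766
print(f"P(cancer | positive) = {p_cancer_given_pos:.3f}")  # 0.094
```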

How many practicing physicians do you think would work that out, before talking to a patient about their test results?

I stopped taking maths at 16, but I sat down and worked it out in a couple of minutes using the natural frequencies approach, so I can’t see in principle why they couldn’t. Remember that there’s no reason a doctor would need to make a calculation like this in front of a patient – they just need to be capable of reading a research paper and translating the figures in that paper into information they could later use to make judgments or provide to patients. A more valid criticism of doctors, one I heard from a doctor himself, is that they very rarely use a microscope any more, and thus make patients wait for lab test results in cases where the GP could have made a diagnosis and treated the problem much more quickly.
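The natural frequencies approach mentioned above can be sketched the same way: instead of manipulating probabilities, count what happens in an imaginary cohort. The 10,000-woman cohort size here is an illustrative choice, not something from the original question:

```python
# Natural frequencies: restate the probabilities as counts
# in an imaginary cohort of 10,000 women.

women = 10_000
with_cancer = women * 0.008            # 80 women have cancer
without_cancer = women - with_cancer   # 9,920 do not

true_positives = with_cancer * 0.90        # 72 positive mammograms
false_positives = without_cancer * 0.07    # ~694 positive mammograms

# Of all positives, what share are real?
share = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} of {true_positives + false_positives:.0f} "
      f"positive mammograms are real: {share:.1%}")
```

Framed this way, the answer is almost obvious: the handful of true positives is swamped by the false positives from the much larger cancer-free group.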

I don’t imagine that this number would change much. I would think it should just be a piece of information that doctors memorize… no mathematical ability required.

How does the healthcare industry compare to engineering and aeronautics when it comes to dealing with human error?

Not favorably. Much of my background is in what’s called high-reliability industries—the ones that operate under conditions of high hazard yet seldom have a bad event—and people in those fields tend to have a systems perspective. We’re not terribly interested in what some individual did. We want to know what led up to a bad event and what changes we need to make to reduce the likelihood of that event ever happening again.

When I got into healthcare, I felt like I’d stepped into an entirely different world. It was all about, “Let’s figure out who screwed up and blame them and punish them and explain to them why they’re stupid.” To me, it’s almost like whistling past the grave. When we demonize the person associated with a bad event, it makes us feel better. It’s like saying, “We’re not stupid so it won’t happen to us.” Whereas in fact it could happen to us tomorrow.

Why do you think healthcare lags so far behind in this respect?

For one thing, in healthcare there’s tons of variation, in both biology and behavior, so physicians are rightly skeptical of the cookie-cutter approach. They think you have to tailor everything to the individual. There’s some truth to that, but the tailoring should be based on what helps the patient, not on your own personal preference.

And then, too, medicine is much older than these other fields, eons old, and for most of that time there wasn’t PubMed or the AMA or what have you. It was all about the expertise of the individual practitioner. It’s a short step from there to assuming that problems in medicine stem from problematic individuals. That’s why we have this whole “train and blame” mentality in medical culture; someone makes a mistake, you train them not to do it anymore, and then you punish them if it happens again. I think we’ve ridden that horse about as far as we can.

That suggests that the biggest obstacle to reducing medical error is medical culture, rather than our understanding of the human body or the quality of the available technologies and treatments.

It’s all those things, but first and foremost, yes, it’s cultural. But I should say before we go any further that I don’t usually use the term “error.” For starters, it distracts people from the real goal, which isn’t reducing error but reducing harm. And it also feeds into precisely the cultural problem we’re discussing. It has a punitive feel, and it suggests that the right answer was available at the time, which isn’t always the case.