
Medical Tests and Bad Statistics – Telling Lies with PPV

I’ve been a Scientific American subscriber for many years, but over the last year I’ve fallen way behind. Of late I’ve embarked on a frantic catch-up, and in the December 2016 issue (yes, I’m that far behind) was an article called When Medical Tests Mislead.

It turns out that many lab-developed tests for ovarian cancer in the U.S. use positive predictive value (which we call precision in data science) as their marketing metric.

Now this would be ok except for the fact that the public isn’t really educated on these things and it’s easy to game the system. I did a quick bit of searching, came up with the details, and it’s appalling how widespread this is.

The test referred to in the SA article was Ovasure, which claimed a PPV of 99.3%. It turns out that this figure was based on a single study in which the researchers already knew that 46% of the test subjects actually had ovarian cancer, because they chose them.

The article goes on to say that when independent biostatisticians recalculated the PPV using the actual 1-in-2500 probability of ovarian cancer in the general population of post-menopausal women, the value was closer to 6.5%.

The FDA came down hard on the manufacturer and after various (ignored) warnings, the Ovasure test was eventually pulled from the market in 2008.

Probably Misleading

So what does this mean in terms of probability? First some definitions:

PPV = precision = \dfrac{TP}{TP + FP}

True positives occur when a positive test result (\oplus) coincides with the person actually having cancer (C); we’ll work with probabilities rather than raw counts, which leaves the ratio unchanged:

TP = P(\oplus \cap C)

False positives occur when the test result is positive but cancer is absent:

FP = P(\oplus \cap \overline{C})

Using the definition of conditional probability we can write:

TP = P(\oplus \cap C) = P(\oplus|C)P(C)

FP = P(\oplus \cap \overline{C}) = P(\oplus|\overline{C})P(\overline{C})

So the precision (or PPV) becomes:

precision = \dfrac{P(\oplus|C)P(C)}{P(\oplus|C)P(C) +P(\oplus|\overline{C})P(\overline{C})}
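Since everything here is a ratio of joint probabilities, this is trivial to compute. Here’s a minimal Python sketch of the formula above (my own illustration, not from the article); we’ll reuse it when we run the numbers below:

```python
def ppv(sensitivity, false_positive_rate, prevalence):
    """Positive predictive value (precision) via the formula above.

    sensitivity         = P(+|C),  the true positive rate
    false_positive_rate = P(+|~C), the rate of false alarms
    prevalence          = P(C),    the base rate in the tested population
    """
    tp = sensitivity * prevalence                # P(+ and C)
    fp = false_positive_rate * (1 - prevalence)  # P(+ and ~C)
    return tp / (tp + fp)
```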

The probability of cancer given a positive test result is given by Bayes Rule:

P(C|\oplus) = \dfrac{P(\oplus|C)P(C)}{P(\oplus)}

And by the Law of Total Probability:

P(\oplus) = P(\oplus|C)P(C) + P(\oplus|\overline{C})P(\overline{C})

Combining the two:

P(C|\oplus) = \dfrac{P(\oplus|C)P(C)}{P(\oplus|C)P(C) + P(\oplus|\overline{C})P(\overline{C})}

This is the same as the expression above for precision. In other words, the claim was that if you got a positive result on the Ovasure test, you had a 99.3% chance of having ovarian cancer.

And many women pre-emptively had their ovaries removed as a result.
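Before running the numbers, we can sanity-check the equivalence between precision and the Bayes posterior with a quick Monte Carlo: simulate a cohort, apply a noisy test, and compare the empirical precision with P(C|\oplus). A rough sketch in Python (the 0.46 prevalence and the 0.994 sensitivity/specificity are the values worked out in the next section):

```python
import random

def simulated_precision(prevalence, sensitivity, specificity, n=1_000_000):
    """Empirical precision: of all positive results in a simulated
    cohort, what fraction actually had cancer?"""
    tp = fp = 0
    for _ in range(n):
        has_cancer = random.random() < prevalence
        if has_cancer:
            positive = random.random() < sensitivity   # hit with prob P(+|C)
        else:
            positive = random.random() > specificity   # false alarm with prob 1 - spec
        if positive:
            if has_cancer:
                tp += 1
            else:
                fp += 1
    return tp / (tp + fp)

# The rigged 46% cohort (the 0.994 test efficiency is derived below):
print(simulated_precision(prevalence=0.46, sensitivity=0.994, specificity=0.994))
# ~0.993 -- the empirical precision matches P(C|+) from Bayes Rule
```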

Running the Numbers

Ok, so let’s use the above derivations to work out how they fudged the numbers, and see if we can recover the actual PPV. Starting with what we know from the setup:

P(C) = 0.46, \, P(\overline{C}) = 0.54, \, P(C|\oplus) = 0.993

Substituting these into the above equation gives one equation in two unknowns, so let’s assume the test has a single efficiency, i.e. its sensitivity equals its specificity, so that P(\oplus|\overline{C}) = 1 - P(\oplus|C). Some basic algebra then gives that efficiency:

P(\oplus|C) = 0.994

and therefore

P(\oplus|\overline{C}) = 6\times10^{-3}
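For completeness, here’s the rearrangement: writing s = P(\oplus|C) and p = P(C), with P(\oplus|\overline{C}) = 1 - s, solving the precision expression for s gives

s = \dfrac{PPV(1-p)}{p(1-PPV) + PPV(1-p)} = \dfrac{(0.993)(0.54)}{(0.46)(0.007) + (0.993)(0.54)} \approx 0.994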

But in reality they only ran a single study, and the assumed incidence of cancer (0.46) was garbage, as described above. So let’s use the actual probabilities from the real world:

P(C) = \frac{1}{2500} = 4\times10^{-4}, \, P(\overline{C}) = 0.9996

And recalculate the actual positive predictive value for the Ovasure test:

P(C|\oplus) = \dfrac{(0.994)(4\times10^{-4})}{(0.994)(4\times10^{-4}) + (6\times10^{-3})(0.9996)} = 0.062

So after using proper statistical methods we get a PPV of 6.2%.
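The same contrast in code, using the ppv helper sketched earlier with the efficiency we derived:

```python
# Rigged study cohort -- the PPV looks spectacular:
print(ppv(sensitivity=0.994, false_positive_rate=0.006, prevalence=0.46))
# -> ~0.993

# Real-world prevalence for post-menopausal women, 1 in 2500:
print(ppv(sensitivity=0.994, false_positive_rate=0.006, prevalence=1/2500))
# -> ~0.062
```

Same test, same error rates; only the base rate changed.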

Discussion

We got pretty close to the 6.5% discussed in the linked resources above, but we expect to be a little out here because we don’t know how many people were actually in the study, so we don’t know whether the cancer probability was exactly 46% or just close (e.g. a 46-to-55 cancer-to-control ratio would be 45.5%). Since we used this figure to derive a test efficiency from the claimed PPV, there’s a little room for error.
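That sensitivity is easy to check in code. A small sketch (reusing the ppv helper from above) re-derives the efficiency under a couple of plausible cohort ratios and recomputes the real-world PPV:

```python
def efficiency_from_ppv(claimed_ppv, p):
    """Solve the precision formula for s = P(+|C), assuming a
    symmetric test where P(+|~C) = 1 - s."""
    return claimed_ppv * (1 - p) / (p * (1 - claimed_ppv) + claimed_ppv * (1 - p))

for p_study in (0.46, 0.455):   # exactly 46%, vs. a 46:55 cancer-to-control cohort
    s = efficiency_from_ppv(0.993, p_study)
    print(p_study, ppv(sensitivity=s, false_positive_rate=1 - s, prevalence=1/2500))
# 0.46  -> ~0.062
# 0.455 -> ~0.064
```

Either way we land in the same neighbourhood as the independent biostatisticians’ 6.5%.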

So there it is. Companies, driven by the profit motive, use their marketing departments and poor experimental methods to mislead the public with bad statistics. No surprise there.

Fortunately these clowns were shut down years ago, but you can bet it’s still happening today.

Like any tool, statistics are powerful and can easily be misused – so always run the numbers.
