I’ve been a Scientific American subscriber for many years, but over the last year I’ve fallen way behind. Of late I’ve embarked on a frantic catch-up, and in the December 2016 issue (yes, I’m *that* far behind) was an article called *When Medical Tests Mislead*.

It turns out that many lab-developed tests for ovarian cancer in the U.S. use positive predictive value (which we call precision in data science) as their marketing metric.

Now this would be OK except for the fact that the public isn’t really educated on these things, and it’s easy to game the system. I did a quick bit of searching, came up with the details, and it’s appalling how widespread this is.

The test referred to in the SA article was Ovasure, which claimed a PPV of 99.3%. But it turns out that this claim was based on a single study where the researchers already knew that 46% of the test subjects actually had ovarian cancer – because they *chose them*.

The article goes on to say that when independent biostatisticians recalculated the PPV using the actual 1/2500 probability of ovarian cancer in the general population of post-menopausal women, the value was closer to 6.5%.

The FDA came down hard on the manufacturer and after various (ignored) warnings, the Ovasure test was eventually pulled from the market in 2008.

## Probably Misleading

So what does this mean in terms of probability? First, some definitions:

True positives occur on a positive test result (+) when the person has cancer (C):

$$P(TP) = P(+ \cap C)$$

False positives occur on a positive test result when cancer is absent:

$$P(FP) = P(+ \cap \neg C)$$

Using the definition of conditional probability we can write:

$$P(+ \cap C) = P(+ \mid C)\,P(C) \qquad P(+ \cap \neg C) = P(+ \mid \neg C)\,P(\neg C)$$

So the precision (or PPV) becomes:

$$\mathrm{PPV} = \frac{P(TP)}{P(TP) + P(FP)} = \frac{P(+ \mid C)\,P(C)}{P(+ \mid C)\,P(C) + P(+ \mid \neg C)\,P(\neg C)}$$

The probability of cancer given a positive test result is given by Bayes’ Rule:

$$P(C \mid +) = \frac{P(+ \mid C)\,P(C)}{P(+)}$$

And by the Law of Total Probability:

$$P(+) = P(+ \mid C)\,P(C) + P(+ \mid \neg C)\,P(\neg C)$$

Combining the two:

$$P(C \mid +) = \frac{P(+ \mid C)\,P(C)}{P(+ \mid C)\,P(C) + P(+ \mid \neg C)\,P(\neg C)}$$

Which is the same as the expression above for precision. In other words, the claim was that if you got a positive result on the Ovasure test, you had a 99.3% chance of having ovarian cancer.
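To make the formula concrete, here’s a minimal Python sketch of the PPV calculation. The function name and the example sensitivity and false positive rates are illustrative assumptions of mine, not figures from the Ovasure study – the point is just how hard the prevalence term drives the result:

```python
def ppv(sensitivity: float, false_pos_rate: float, prevalence: float) -> float:
    """Positive predictive value P(C|+) via Bayes' Rule and the
    Law of Total Probability."""
    true_pos = sensitivity * prevalence            # P(+|C) * P(C)
    false_pos = false_pos_rate * (1 - prevalence)  # P(+|~C) * P(~C)
    return true_pos / (true_pos + false_pos)

# Illustrative rates only: a test with 90% sensitivity and a 1% false
# positive rate looks stellar at an engineered 46% prevalence...
print(ppv(0.90, 0.01, 0.46))    # ≈ 0.987
# ...but far less impressive at the real 1/2500 population prevalence.
print(ppv(0.90, 0.01, 1/2500))  # ≈ 0.035
```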

And many women pre-emptively had their ovaries removed as a result.

## Running the Numbers

OK – so let’s use the above derivations to work out how they fudged the numbers, and see if we can work out the actual PPV. Start with what we know based on the setup:

$$P(C) = 0.46 \qquad P(\neg C) = 0.54 \qquad P(C \mid +) = 0.993$$

Substituting these into the above equation, and using some basic algebra to rearrange, we can get a value for the efficiency of the test – the ratio of its true positive rate to its false positive rate:

$$E = \frac{P(+ \mid C)}{P(+ \mid \neg C)} = \frac{P(C \mid +)\,P(\neg C)}{\bigl(1 - P(C \mid +)\bigr)\,P(C)}$$

and therefore

$$E = \frac{0.993 \times 0.54}{0.007 \times 0.46} \approx 166.5$$

But in reality they only did a single study, and the probability of the incidence of cancer (0.46) was garbage as described above. So let’s use the actual probabilities from the real world:

$$P(C) = \frac{1}{2500} = 0.0004 \qquad P(\neg C) = 0.9996$$

And recalculate the actual positive predictive value for the Ovasure test:

$$P(C \mid +) = \frac{E \cdot P(C)}{E \cdot P(C) + P(\neg C)} = \frac{166.5 \times 0.0004}{166.5 \times 0.0004 + 0.9996} \approx 0.062$$

So after using proper statistical methods we get a PPV of 6.2%.
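The arithmetic above can be checked in a few lines of Python. This sketch treats the test’s efficiency as the ratio of its true positive rate to its false positive rate – the only quantity recoverable from the claimed PPV and the study prevalence:

```python
# Back the efficiency E = P(+|C) / P(+|~C) out of the claimed PPV,
# using the study's engineered 46% cancer prevalence.
claimed_ppv = 0.993
study_prev = 0.46
E = claimed_ppv * (1 - study_prev) / ((1 - claimed_ppv) * study_prev)
print(round(E, 1))  # 166.5

# Re-evaluate the PPV at the real-world 1/2500 prevalence.
real_prev = 1 / 2500
actual_ppv = E * real_prev / (E * real_prev + (1 - real_prev))
print(round(actual_ppv, 3))  # 0.062
```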

## Discussion

We got pretty close to the 6.5% discussed in the linked resources above, but we expect to be a little out here because we don’t know how many people were actually in the study, so we don’t know if the cancer probability was *exactly* 46% or just close (e.g. a 46-to-55 cancer-to-control ratio would be 45.5%). Since we used this to derive a test efficiency based on the claimed PPV, there’s a little room for error.
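That sensitivity to the exact study prevalence is easy to quantify. Here’s a quick check – the 46-to-55 split is a hypothetical example, since the real study size isn’t known:

```python
def recalc_ppv(claimed_ppv: float, study_prev: float, real_prev: float) -> float:
    """Back the true/false positive ratio out of the claimed PPV at the
    study prevalence, then re-evaluate the PPV at the real prevalence."""
    E = claimed_ppv * (1 - study_prev) / ((1 - claimed_ppv) * study_prev)
    return E * real_prev / (E * real_prev + (1 - real_prev))

print(recalc_ppv(0.993, 0.46, 1 / 2500))      # ≈ 0.0625
print(recalc_ppv(0.993, 46 / 101, 1 / 2500))  # ≈ 0.0636 (46 vs 55 split)
```

A half-point shift in the study prevalence moves the recalculated PPV by about a tenth of a percentage point – enough to account for the gap between 6.2% and 6.5%.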

So there it is. Companies, driven by the profit motive, use their marketing departments and poor experimental methods to mislead the public with bad statistics – no surprise there.

Fortunately these clowns were shut down years ago but you can bet it’s still happening today.

Like any tool, statistics are powerful and can easily be misused – so always run the numbers.

Categories: Analytics, Statistics