Probability fudging: drug companies with too many vaccinations, airplanes safer than cars

In this post I cover some examples of probability fudging which goes on around us daily. The basic tenet of this post is:
When the probabilities of two things are both low (less than 1%), the comparison between them becomes unreliable...it is a property of statistical distributions. If something is very rare, and something else is very, very rare, it is difficult to compare the two. You don't have much confidence in any conclusion you draw.
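To see why, here is a small Python sketch with made-up rates and sample sizes (purely illustrative, not data from any study) that simulates how noisy estimates of rare rates are relative to their size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rare rates, purely illustrative: 3 in 10,000 and 0.3 in 10,000
p_a, p_b = 3e-4, 3e-5
n = 100_000          # hypothetical number of people observed in each group
trials = 10_000      # repeat the "study" many times

# Estimated rates from simulated counts
est_a = rng.binomial(n, p_a, size=trials) / n
est_b = rng.binomial(n, p_b, size=trials) / n

# Relative spread: standard deviation of the estimate divided by the true rate.
# The rarer the event, the larger this is, so comparisons between two rare
# rates get shaky.
print(f"relative spread of estimate A: {est_a.std() / p_a:.2f}")
print(f"relative spread of estimate B: {est_b.std() / p_b:.2f}")
```

The rarer event's estimate bounces around much more, relative to its own size, than the more common one.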

HPV and rare-disease vaccinations

There is a lot of talk about Human Papillomavirus (HPV) vaccinations. Kids, especially girls, between the ages of 10 and 15 are recommended to get this vaccine. Let us look at the data published by the CDC here.

There are 79 million Americans (out of a total population of about 300 million) infected with HPV, and about 12 million new infections per year. About 27,000 cancers per year are attributed to HPV, of which about 18,000 are in girls and the remaining 9,000 in boys.

Since 1 in 4 Americans already has the virus, and 12 million get it every year, the virus itself can't be that bad. This is a classic case of measuring too much: if you measured the common bacteria present in the mouth, surely 1 in 4 people would have some particular "infection"...most Americans get on with their lives just fine, and an HPV infection, even if they don't know about it, doesn't seem to be a big deal in everyday life. Going by this data, it is more common than the common cold virus, and you begin to wonder how (and why) they collected this data in the first place. But let's trust the data for a moment anyway.

The cases with severe effects (cancers) are the ones of interest to us: 27,000. It is a large number, but against the overall population of HPV-infected people (79 million), it is a very small percentage, about 0.034%. Only about 3 in 10,000 infected people are getting cancers attributable to HPV.

The pharma companies say we can eliminate this tiny percentage of people getting cancers by injecting them with the HPV vaccine. The side effects of the vaccine appear on the same page: out of 67 million doses of vaccine, 25,000 people reported some side effect, and about 2,000 of these were serious. The serious side effects come to about 0.003%, or about 0.3 in 10,000 doses. We must remember that many people may not report a problem even if their child has side effects, so this 25,000 is likely an undercount.
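As a quick sanity check on the arithmetic, here is a minimal Python sketch using only the numbers quoted above (it treats doses and people interchangeably, which is a simplifying assumption):

```python
# Numbers quoted from the CDC page cited above
hpv_infected = 79_000_000        # Americans with HPV
hpv_cancers = 27_000             # cancers attributed to HPV per year
vaccine_doses = 67_000_000       # doses given
serious_side_effects = 2_000     # serious adverse event reports

cancer_rate = hpv_cancers / hpv_infected
side_effect_rate = serious_side_effects / vaccine_doses

print(f"cancer rate:            {cancer_rate:.4%}  (~{cancer_rate * 10_000:.1f} in 10,000)")
print(f"serious side effects:   {side_effect_rate:.4%}  (~{side_effect_rate * 10_000:.1f} in 10,000)")
print(f"ratio (benefit / risk): {cancer_rate / side_effect_rate:.0f}x")
```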

At first glance, it looks like the benefit of the vaccine, eliminating about 3 in 10,000 cancers, is about 10 times larger than the serious side effects of the vaccine (about 0.3 in 10,000). But these are very small percentages; do we have good reliability of such measurements? No. The confidence at these probabilities is very low, and a vaccine needs to be at least 100 times more effective than no vaccine for the benefit not to be drowned in statistical error.

If the disease is rare, like this HPV-related cancer, does it make sense to vaccinate at all? Obviously what is rare and what is not is subjective, but to me, a disease that is going to happen to 3 in 10,000 is quite rare, and we should not hurry to vaccinate kids against it. Couple that with the problem specific to this case, that there is no clear indication that HPV is causing the cancer (they are confusing correlation with causality, and assuming that the HPV vaccine will prevent cancers 30 years from now, which is a very speculative assumption), and you see that HPV should not be a mandatory vaccine.

Look at the distribution of 1-x, not of x

When probabilities are very low, the right distribution to look at is the 1-x distribution. Focusing on x instead is the statistical trickery these guys are using to convince us of the wonderful effects of the vaccine. If x is small, less than 0.1% (1 in 1,000), you must evaluate the risk of the intervention causing more damage than the disease itself. Unless you have clear data that this does not happen, don't administer vaccines (or other procedures).

This is related to Bayes' theorem and false positives in samples: when something is rare, the false positives (or random positives) will dominate the test results. One must consider the 1-x distribution in these cases.

Let us look at the 1-x distribution of the same data.

99.966% will not develop a cancer related to HPV if they are not vaccinated.
99.997% will not suffer a serious side effect if given a vaccine for HPV.

Anybody in their right mind can see that these are comparable numbers, and we do not need to vaccinate kids against HPV, because the risk of side effects is in the same ballpark as the benefit of taking the vaccine!
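If you want to check the complements yourself, they follow directly from the same quoted figures (same simplifying assumptions as the earlier sketch):

```python
# Complement (1 - x) view of the same numbers
cancer_rate = 27_000 / 79_000_000
side_effect_rate = 2_000 / 67_000_000

print(f"no HPV-related cancer without the vaccine: {1 - cancer_rate:.3%}")
print(f"no serious side effect with the vaccine:   {1 - side_effect_rate:.3%}")
```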

Focusing on the "x" distribution magnifies the data; but in reality, the distribution of interest is the 1-x, which is a stable distribution and doesn't change much whether or not we vaccinate our children.

To make this clearer, let us look at other examples from daily life where the 1-x distribution should be looked at, not the distribution of x.

Airplane and car driving safety

Airplanes are safe, and driving in cars is safe. Most people know this, and will take a plane or car ride from one place to another without thinking about safety; they will only worry about costs and the conveniences and inconveniences when comparing the two. However, you have all sorts of bad statisticians comparing airplane safety to car safety and concluding that airplanes are safer (or less safe) than driving. The error there is that the 1-x is the real distribution of interest: the probability of survival. That is maybe 99.95% over 10 years of car driving versus 99.99% for plane riding, and those two are similar, don't you think? One thing is 99.99% safe, the other is 99.95% safe...we can agree that they are both quite safe. No one thought that planes are significantly safer or less safe than cars. Both are safe; the 1-x is the real distribution.
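To show the two framings side by side, here is a small sketch using the survival figures above (99.95% and 99.99% are made-up round numbers, not real accident statistics):

```python
# Hypothetical 10-year survival probabilities (illustrative only)
car_survival = 0.9995
plane_survival = 0.9999

car_risk = 1 - car_survival      # the "x" framing
plane_risk = 1 - plane_survival

# The x framing makes cars look dramatically worse...
print(f"risk ratio (car / plane): {car_risk / plane_risk:.0f}x")

# ...while the 1-x framing shows two numbers that are practically the same
print(f"survival: car {car_survival:.2%} vs plane {plane_survival:.2%}")
```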

When the benefits are doubtful, as is the case for HPV vaccines, it gets even worse; you are exposing these kids to unnecessary risk by vaccinating them.

But from statistics alone you can see that pharma companies are comparing the wrong distribution to sell their drugs.

Bayes' theorem and false positives

These examples are closely related to Bayes' theorem: when the probability of something is small, false positives will dominate the test results. Cervical cancer is present in many cases where HPV is not present, and that data is extremely important!
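To make the false-positive point concrete, here is a short sketch of Bayes' theorem with made-up numbers (a 0.1% prevalence, a 99% true-positive rate, and a 1% false-positive rate; these are illustrative, not figures for any real HPV test):

```python
# Hypothetical screening test for a rare condition (all numbers illustrative)
prevalence = 0.001            # 1 in 1,000 people actually have the condition
sensitivity = 0.99            # P(test positive | condition)
false_positive_rate = 0.01    # P(test positive | no condition)

# Bayes' theorem: P(condition | positive test)
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
p_condition_given_positive = prevalence * sensitivity / p_positive

print(f"P(positive test) = {p_positive:.3%}")
print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
# With a rare condition, most positives are false positives:
# here the posterior is only about 9%, despite a 99% sensitive test.
```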