Tuesday, August 19, 2014

How do you know when someone is trying to deceive you with Statistics?

1) When they use metrics that sound good at first, but don't actually mean what they're trying to suggest.


For instance:

80% of all Camrys sold in the last 20 years are still on the road!

The metric suggests "oh yeah, they're really reliable," but think hard about it.

Maybe 80% of all Camrys sold in the last 20 years were sold within the last 10 years?  That doesn't say much for reliability, does it?  In fact, the statistic would then hold even if no Camry ever made it past 10 years.

In any case, I don't mean to disparage the Camry, as I do think it is a nice car, but the metric has absolutely no meaning unless you have a lot more information.

Perhaps if they wanted to really show reliability, they would have put it as "80% of all Camrys 20 years or older are still on the road!"  Now that would say something about reliability.
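
The arithmetic is easy to check.  Here's a toy sketch with numbers I made up purely to illustrate the point -- sales skewed toward recent years, and the worst possible reliability:

    # Made-up sales figures, skewed toward recent years.
    sold_recent = 800_000   # Camrys sold within the last 10 years
    sold_older  = 200_000   # Camrys sold 10-20 years ago

    # Pessimistic reliability: every car dies at exactly 10 years old,
    # so all the recent cars survive and none of the older ones do.
    still_on_road = sold_recent

    total_sold = sold_recent + sold_older
    print(f"{still_on_road / total_sold:.0%} still on the road")  # -> 80%

So the slogan can be literally true while every 10-year-old Camry sits in a junkyard.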

2)  When they declare something effective without comparing it to the baseline of doing nothing, or to the alternatives.


The first example that comes to mind is vented ashtrays.  I remember reading a study about them.  They were made to help dissipate indoor smoke back when smoking indoors was popular.  An experiment was conducted to evaluate how effectively they cleared smoke from a room, and it turned out that leaving the vents off the ashtrays was more effective than putting them on.  But the marketing team behind the product took the result that said "sure, the smoke clears out with these vents, but not having the vents is better" and cut it down to "the smoke clears out with these vents."

Thinking about it further, comparing against "doing nothing" (a placebo) seems to be the gold standard for evaluating medical treatments, with p < 0.05 on a one-sided t-test being enough to say "LOOK AT THIS IT IS EFFECTIVE!!!1!!!11!!!!"  What is not required is a comparison against cheaper alternatives, and that ought to be the gold standard -- benchmarking against the competition instead of simply declaring effectiveness.

Leaving out information about alternatives is deceptive.  Seriously -- if you want to prescribe me Vicodin for my pain, it had damn well better be more effective than Tylenol at making my pain tolerable, or else you have convinced me to waste money.  (Incidentally, this was my experience when I got my wisdom teeth out.  Ibuprofen was just as effective at reducing the pain as Vicodin, but Vicodin was a lot more expensive and came with a 'high' that I really hated.)
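
To make that concrete, here's a toy sketch of the two comparisons (all numbers invented, and the "alternative" keyword needs scipy 1.6 or newer):

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    # Entirely made-up pain-relief scores (higher = more relief).
    placebo    = rng.normal(3.0, 1.0, 50)
    cheap_drug = rng.normal(5.0, 1.0, 50)  # the ibuprofen stand-in
    pricy_drug = rng.normal(5.2, 1.0, 50)  # the Vicodin stand-in

    # The usual bar: one-sided t-test against placebo.
    print(ttest_ind(pricy_drug, placebo, alternative='greater').pvalue)
    # -> tiny p-value, so "LOOK AT THIS IT IS EFFECTIVE"

    # The bar that actually matters: is it better than the cheap drug?
    print(ttest_ind(pricy_drug, cheap_drug, alternative='greater').pvalue)
    # -> a much larger p-value; beating the cheap competition is the hard part

Clearing the first test while never running the second is exactly the omission to watch for.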

3)  Implying causation from correlation.  Period.

I'll just leave this here for you to laugh at.

[Image: the obligatory chart of global average temperature plotted against the number of pirates, captioned:]
Yarrgh, we be punishin' yer punishin' of our sacred brotherhood by raisin' yer ocean temperchurs.  Long Live BlackBeard!

Same goes for debates on guns, drugs, education, government spending in general, and so, so much more.
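
Correlations like that are cheap to manufacture.  A toy sketch: any two made-up series that merely trend in opposite directions will correlate almost perfectly.

    import numpy as np

    years = np.arange(1820, 2020, 10)
    # Invented monotone trends: pirate headcount falls, temperature rises.
    pirates = np.linspace(50_000, 400, len(years))
    temps = np.linspace(13.5, 14.8, len(years))
    temps += np.random.default_rng(1).normal(0, 0.05, len(years))

    print(f"correlation: {np.corrcoef(pirates, temps)[0, 1]:.2f}")
    # -> close to -1.00, yet pirates don't cool the oceans

Shared trends (here, just the passage of time) will do that on their own; causation needs a lot more than a correlation coefficient.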

4)  Any other omission of information.

I'll borrow some text from Richard Feynman here.
Now it behooves me, of course, to tell you what they're missing [...] But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school--we never say explicitly what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty--a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid--not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked--to make sure the other fellow can tell they have been eliminated.
From Cargo Cult Science, delivered to Caltech students at commencement, also included in Surely You're Joking, Mr. Feynman.

It's really easy to be deceived by statistics.

What's really hard is to not be deceived.  Seriously, any time someone uses statistics to back up their point, there's likely some kind of deception in there -- whether that's an intent to deceive you, a messenger who was themselves deceived before relaying it, or someone who deceived themselves while gathering the data (e.g. confirmation bias).
