Testing, damned testing and statistics

You have a hunch or hypothesis you want to test. Or you need to prove your campaign has worked. Or you want to challenge the current thinking. What do you do?

Statistical analysis – if done correctly – can prove or disprove a result. So a 2.3% response rate can be tested against the 2.1% response rate for the control, and shown to be either a significant improvement or no different (depending on the underlying volumes and the significance level chosen).
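The comparison above is a standard two-proportion z-test. A minimal sketch, using only the Python standard library and assuming hypothetical volumes of 10,000 contacts in each cell (the original does not state the volumes):

```python
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two response rates."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical volumes: 2.3% of 10,000 in the test vs 2.1% of 10,000 in the control
z, p = two_proportion_z_test(230, 10_000, 210, 10_000)
```

At these assumed volumes the p-value comes out well above 0.05, so the 0.2-point uplift would not be significant; with much larger volumes the same rates could be. This is exactly why the underlying volumes matter as much as the rates themselves.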

But that’s the problem: “if done correctly”. From what we see of others’ work, it often isn’t. We see misleading conclusions drawn from indexes (for example, treating an index of 102 as a meaningful over-representation, or claiming a segment is encapsulated by a demographic with a high index when that demographic accounts for only 5% of the segment), and business decisions taken on meaningless results.

That’s when you need to get your data fit for purpose, know how to test it, and have the tools to do so. We use SPSS or SAS (although Excel also handles small data-sets well), and techniques such as t-tests, ANOVA, indexes, z-scores, chi-squared, Cramér’s V and correlation coefficients.
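Two of those techniques, chi-squared and Cramér’s V, can be sketched together on a contingency table. A standard-library implementation, reusing the hypothetical test/control volumes from earlier (responders vs non-responders are assumed splits, not figures from the original):

```python
from math import sqrt, erf

def chi2_and_cramers_v(table):
    """Chi-squared statistic and Cramér's V for a 2-D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = sum(
        (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
        / (row_totals[i] * col_totals[j] / n)
        for i in range(len(row_totals))
        for j in range(len(col_totals))
    )
    # Cramér's V rescales chi-squared to a 0..1 effect size
    v = sqrt(chi2 / (n * (min(len(row_totals), len(col_totals)) - 1)))
    return chi2, v

# Hypothetical 2x2 table: [responders, non-responders] for test and control
table = [[230, 9_770],
         [210, 9_790]]
chi2, v = chi2_and_cramers_v(table)
# For a 2x2 table (1 degree of freedom), the p-value follows from the normal CDF
p = 2 * (1 - 0.5 * (1 + erf(sqrt(chi2) / sqrt(2))))
```

On a 2x2 table this chi-squared statistic is the square of the two-proportion z-statistic, so the two tests agree; Cramér’s V adds a sense of effect size, which here is tiny even though the rates differ.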

So prove that hunch, test that hypothesis, dispel that myth, and prove your campaign worked.  All with statistics.
