Determining statistical significance for online marketers doing A/B testing

September 16, 2013

We’re online marketers, not statisticians, so I’ve witnessed confusion from time to time when “statistical significance” enters the picture.

Statistical significance is a critical idea when it comes to A/B testing in the online marketing world because it distinguishes happenstance from real, tangible, actionable test results.

Whether you like it or not, as a marketer conducting an experiment, you’re suddenly part scientist. You have to prove the results of your experiment are worthy of further action and exploration.

Without further ado — here’s what you should know about statistical significance as an internet marketer:

1. The statistical significance associated with your A/B test measures how confident you are that the results didn’t just occur by chance.

2. Statistical significance is described in terms of confidence levels. Typically, the minimum confidence level used to conclude it’s very unlikely your test winner happened by chance is 95%.

3. If you can reach a confidence level of 97–99%, you’re in a very strong position to say your test results are not a coincidence.

4. Another way to think about what statistical significance means is this:

Your CEO: “How confident are you that the A/B test results here aren’t simply a coincidence?”

You: “I am 99% confident, statistically speaking, that it’s no coincidence.”

Boom. You are now empowered to sling the phrase left and right.
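If you want to see where that confidence number actually comes from, here’s a minimal sketch in Python. It uses a standard two-proportion z-test (one common way calculators compute A/B significance — not necessarily the method any particular tool uses), and the visitor and conversion numbers are purely hypothetical:

```python
from math import erf, sqrt

def ab_test_confidence(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test: returns the confidence level (as a fraction)
    that the difference between variants A and B isn't just chance."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the assumption there's no real difference
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return 1 - p_value

# Hypothetical test: 10,000 visitors per variant, 500 vs. 580 conversions
confidence = ab_test_confidence(10000, 500, 10000, 580)
print(f"Confidence: {confidence:.1%}")  # comfortably above the 95% bar
```

With a smaller difference or fewer visitors, the same function will report a confidence level well below 95% — which is exactly when you should keep the test running rather than declare a winner.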

Here’s a really useful statistical significance calculator marketers can use.