You know what statistical significance means…right? It’s how we figure out how many people need to take a survey before the results actually mean something. But they didn’t teach this concept in my engineering school, and in the do-it-yourself world of web analytics, I frequently make decisions where it matters a lot. For example, in my AdWords campaign, should I shut off one ad or keep them both running a bit longer?

Real world example:

**We’re running an A/B test of 2 ads, to see which one gets more clicks.**

After 2 days, each ad has 1000 impressions.

Ad1 has 40 clicks, CTR=4%

Ad2 has 32 clicks, CTR=3.2%

Ad1 is clearly better, right? I mean, 1000 impressions is a lot. And Ad1 is almost a full percent higher than Ad2. It’s obvious, right?

Turns out the results are less than 1 standard deviation apart, and we cannot be confident that the CTRs are different.

If Ad2 had only 28 clicks for a CTR of 2.8%, then we could be 90% confident the CTRs are different.
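If you want to check numbers like these yourself, here’s a minimal sketch of the standard two-proportion z-test, which measures how many standard deviations apart two CTRs are (the function name `z_score` is my own, not from any particular tool):

```python
from math import sqrt

def z_score(clicks_a, n_a, clicks_b, n_b):
    """How many standard deviations apart are two observed CTRs?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled CTR: the best estimate if the two ads were actually identical
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    # Standard error of the difference between the two CTRs
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

print(round(z_score(40, 1000, 32, 1000), 2))  # about 0.96 -- less than 1 SD apart
```

With only 1000 impressions each, the 4% vs. 3.2% gap comes out under one standard deviation, which is why we can’t call a winner yet.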

OK, now after 10 days, each ad has 5000 impressions.

Ad1 has 200 clicks, CTR is still 4%

Ad2 has 160 clicks, CTR is still 3.2%

Is this enough to say they are different? Yes, now they are over 2 standard deviations apart and we can be 95% confident that the CTRs are different.

In fact, even if Ad2 had 175 clicks for a CTR of 3.5%, we could be 90% confident that the CTRs are different in this case.
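Running the same two-proportion z-test on the 10-day numbers shows why the larger sample settles it. The z-score can also be converted into a two-tailed confidence level with the normal distribution (again, a sketch of the standard calculation, not any specific tool’s code):

```python
from math import sqrt, erf

def z_score(clicks_a, n_a, clicks_b, n_b):
    """How many standard deviations apart are two observed CTRs?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled CTR if the ads were identical
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

z = z_score(200, 5000, 160, 5000)
# Two-tailed confidence that the underlying CTRs really differ
confidence = erf(abs(z) / sqrt(2))
print(round(z, 2))        # about 2.15 -- over 2 SDs apart
print(confidence > 0.95)  # over 95% confident
```

Same 4% vs. 3.2% gap, but five times the impressions shrinks the standard error enough to clear the 95% bar.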

Wouldn’t it be nice to have a handy-dandy, free calculator for this?

Here’s a drop-dead simple statistical significance calculator tool, an Excel spreadsheet by Brian Teasley, which I think you will find useful. I know I do.

For more on this, read

Excellent Analytics Tip #1: Statistical Significance by Avinash Kaushik