21 Mar 12 5:41 am
I try to keep A/B split tests going constantly on all my own sites (or paid marketing projects for my clients). And Google Website Optimizer is the tool I've always used, both because it's free and because it seems well-made and useful.
The biggest problem I have with GWO is really a limitation of statistics, not Google's fault: you can't trust statistical results until you have lots of samples to analyze.
Having enough samples to believe your results with x% certainty is called reaching an x% "confidence level". Foolish marketers stop their split tests too soon and then bake the wrong result into their marketing. Google tries to protect us from this by showing the confidence level (more often, the lack of one) in your GWO reports - you must take that seriously.
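To make "confidence level" concrete, here's a rough sketch of how a figure like GWO's can be computed by hand, using a standard two-proportion z-test. The visitor and sale counts are made up for illustration; I'm not claiming this is exactly Google's calculation, just the usual textbook one:

```python
import math

def confidence(sales_a, visitors_a, sales_b, visitors_b):
    """Two-sided confidence that pages A and B really convert differently,
    via a pooled two-proportion z-test."""
    p_a = sales_a / visitors_a
    p_b = sales_b / visitors_b
    # Pooled conversion rate under the "no real difference" assumption.
    p = (sales_a + sales_b) / (visitors_a + visitors_b)
    se = math.sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    z = abs(p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value

# Made-up example: 30 sales from 1,000 visitors vs. 45 from 1,000.
print(round(100 * confidence(30, 1000, 45, 1000)))  # roughly 92 (percent)
```

Notice that even a 50% relative difference in sales (3.0% vs. 4.5%) hasn't reached 95% confidence after 2,000 total visitors - which is exactly the waiting problem described below.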
In one GWO A/B split test, comparing an RR (Radical Redesign) page against my current live page, the winner actually flipped back and forth several times before there were enough samples to reach a 90% confidence level!
Here's my problem - at least as a vendor. The simple "Single Factorial" split test compares 2 pages, sending 50% of your visitors to one and 50% to the other. Then it gathers statistics on the results (for a vendor, usually the sales) for each page.
It takes several hundred results from each page to reach a decent confidence level.
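If you want to ballpark that number for your own rates, the standard sample-size formula for comparing two proportions will do it. This sketch uses GWO's 90% confidence level plus the conventional 80% statistical power; the conversion rates are illustrative assumptions, not figures from my own tests:

```python
import math
from statistics import NormalDist

def visitors_per_page(p_a, p_b, confidence=0.90, power=0.80):
    """Rough visitors needed on EACH page to detect the difference
    between conversion rates p_a and p_b (two-sided test)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - (1 - confidence) / 2)  # two-sided critical value
    z_beta = nd.inv_cdf(power)
    p_bar = (p_a + p_b) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_a * (1 - p_a) + p_b * (1 - p_b))) ** 2
    return math.ceil(numerator / (p_a - p_b) ** 2)

# To tell a 2.0% page from a 2.4% page (a 20% relative lift):
n = visitors_per_page(0.02, 0.024)
print(n, "visitors per page, i.e. roughly", round(n * 0.022), "sales per page")
```

At a 2% conversion rate, that works out to several hundred sales per page - which is where the "several hundred results" rule of thumb comes from.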
But when I start a new website or blog, it takes weeks or months to get enough sales for a high confidence split test.
A "multi-factorial" split test (where you're comparing more than one pair of things) takes even more visits. Who can wait months - with 50% of your traffic going to a page which must, by definition, be inferior - just to find out which page sells better?
I guess affiliates have it a bit better. They don't have to measure sales, only visitors who click on their hoplink - a bigger number.
The only solution I've found for this low-traffic, thus low-sales, dilemma is not a very good one. The way you're expected to compare pages in your split test is by how many visitors end up on your "Thank You" page, or another page representing a completed sale.
Instead, I use a "proxy result" to represent a potential sale. That is, I test for something like how many visitors arrived at a detailed page about the product, a free sample chapter, a page of product specs or some such. I figure that at least these people have a real interest in my offering. And there are a lot more of those than completed sales, speeding up the split test.
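A back-of-the-envelope illustration of the speedup, using the same pooled two-proportion z-test that split-test calculators use (all the counts here are made up): suppose the proxy page gets visited about ten times as often as a sale happens. With the same 1,000 visitors per page, the proxy comparison can reach a decision-worthy confidence level while the sales comparison is still essentially a coin flip:

```python
import math

def confidence(hits_a, visitors_a, hits_b, visitors_b):
    """Two-sided confidence that the two rates really differ (pooled z-test)."""
    p = (hits_a + hits_b) / (visitors_a + visitors_b)
    se = math.sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    z = abs(hits_a / visitors_a - hits_b / visitors_b) / se
    return 1 - 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# 1,000 visitors per page.  Sales: 20 vs. 24.  Proxy-page visits: 200 vs. 240.
print(round(100 * confidence(20, 1000, 24, 1000)))    # sales: about 46% - useless
print(round(100 * confidence(200, 1000, 240, 1000)))  # proxy: about 97% - usable
```

The catch, of course, is that the proxy only measures interest, not buying - so a page that wins on sample-chapter clicks isn't guaranteed to win on sales.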
I suppose one can just keep split tests going constantly, accepting the loss of some sales to the inferior page and the long time required per test. Since they are automatic, quick, and easy to set up (and provide such valuable data), why not just run them all the time? Nothing wrong with constant improvement of your pages, based on scientific marketing, right?
If anyone knows a solution to this problem of months long split tests for low traffic volume sites, I'd sure love to hear it!