How to Split Test Effectively, Even With Low Traffic

Some websites feel like magic.

You arrive at their homepage looking for something and within a few clicks you’ve found what you were looking for… and a little more besides.

They’ve gotten you to click on a few other links, or add a few other items to your cart, almost before you knew it had happened.

Then you sit down to work on your own site and you just know users aren’t having that same smooth experience. But you want them to.

Well, those websites didn’t get that way by accident. And today we’re going to dive into a practice that can make the difference.

Split testing allows you to test two versions of a page, element, or site to determine which leads more visitors to achieve a goal (usually tied to an increase in revenue).

Those two options — an “A” test and a “B” test — are randomly served up to visitors so that half of your visitors get one version, and half of your visitors get another. By measuring what they do after arriving on that page, you can figure out which version is most successful, and then make that your new default.

For example, maybe you’re trying to determine if a red button or a blue button will lead more people to buy a widget. You could run a split test, showing a red button to half your traffic and a blue button to the other half, to see which results in more sales.
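The random 50/50 assignment described above can be sketched in a few lines. This is a minimal illustration, not any particular tool’s implementation; the function and test names are hypothetical, and it assumes you have a stable visitor ID (say, from a cookie) to hash on:

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str = "button-color") -> str:
    """Deterministically bucket a visitor into variant A or B.

    Hashing the visitor ID (rather than flipping a coin on every
    request) keeps a returning visitor in the same bucket across
    page loads, so each person consistently sees one version.
    """
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    # Even hash values get version A (red button); odd get B (blue).
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the assignment is a pure function of the visitor ID, roughly half of your traffic lands in each bucket without you having to store who saw what.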

But tests aren’t limited to buttons — you can also test copy, page design, images, testimonials or, in the case of a helicopter tour company, an entire site (though that’s not an approach I’d recommend… but more on that in a minute).

Where Most People Go Wrong When Split Testing

There’s a lot of information out there about split testing; after all, if there’s one thing marketers like to do, it’s create marketing content.

But, as you may know if you’ve dabbled with it in the past, a lot of the “best practices” out there probably won’t work for you.

Why?

Because most of that content was written by companies that have a million uniques a month or more, which is actually a pretty small percentage of the sites out there on the internet.

Even businesses that are making a few million dollars a year often don’t see that many visitors, and that means most of those best practices simply aren’t relevant for most sites.

So today I’m going to distill down my experience from the last 9 years, during which I’ve been involved in countless split tests for countless companies, and share best practices that work for “the rest of us” — that is, those of us with sites that aren’t Amazon or Google.

What to Split Test: Make Sure It’s Just Right

A few years ago a helicopter tour company decided it was time to redesign its website. Normally, this wouldn’t be a big deal. Companies decide to redesign their sites all the time, after all.

The site hadn’t been updated in years and they wanted something more modern — something with a bit more flash and pizzazz. They hired a web design firm and spent the better part of a year and somewhere in the range of $20-30K hammering out the details for the new site’s design.

Finally the big day came, and they launched the site.

And, due to a misconfiguration (that is, totally by accident), half their traffic went to the old site and half was sent to the new site. Unfortunately, what they found was pretty tragic.

The new site didn’t convert nearly as well as the old site had. In fact, the old site was converting about 30% better.

It wasn’t pretty, but it worked.

There’s an important lesson to be learned here. Test too much, and you run the risk of investing too much time and money in a solution that ultimately doesn’t bear out.

But, it’s important to note that the opposite is also true.

Change too little and, without significant time or traffic, you won’t be able to see a measurable difference based on those changes.

It’s a fine line — but you want to change enough so that user actions are measurably different, while also being able to iterate quickly on those tests so you can implement those that are successful and move on.

For most businesses this comes down to two versions of a single page — not just an element on that page, but also not more than one page. Most companies can spin up a new page in relatively little time, and A/B testing a page allows for enough of a difference in results that those results can actually be measured. Then, once you’ve determined which version performs better, you can iterate on that page, testing smaller elements, if you’d like, to see which part of the page’s design is actually leading to better conversions.

After all, once you discover that a change can improve your bottom line, you’ll want to get it in place as soon as possible, or you’re missing out (literally) on an increase in profits.

What to Measure: Looking at the Results of a Split Test

Ultimately, deciding whether or not the helicopter tour company’s new website was a success came down to looking at sales numbers.

And that’s important to recognize.

All too often, when looking at split tests, marketers fall into the habit of measuring clicks or views instead of dollars. But that can be misleading.

For example, imagine that we have 1,000 visitors come to a sales page. Half of those visitors see version A of the page; half see version B.

Of the 500 people who see version A, 300 of them click on a button at the top of the page that says “Learn more.”

Of the 500 people who see version B, only 100 click on a button at the top of the page that says, “Get your copy now.”

But of the 300 who clicked on version A, only 10 people wound up actually buying the widget, while all 100 of those who clicked on version B made it all the way through the sales process.

If we were just measuring clicks, version A would have seemed much more successful than version B — but when we look at dollars, it’s clear that version B is the winner.
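Running the arithmetic from the example makes the gap obvious. The visitor, click, and purchase counts below come straight from the scenario above; the $50 widget price is a made-up assumption just to put the result in dollars:

```python
# Figures from the example above; the widget price is hypothetical.
PRICE = 50.0

variants = {
    "A": {"visitors": 500, "clicks": 300, "purchases": 10},
    "B": {"visitors": 500, "clicks": 100, "purchases": 100},
}

for name, v in variants.items():
    click_rate = v["clicks"] / v["visitors"]   # what click-counters see
    revenue = v["purchases"] * PRICE           # what actually matters
    print(f"Version {name}: {click_rate:.0%} clicked, "
          f"{v['purchases']} purchases, ${revenue:,.0f} revenue")
```

Version A wins on click rate (60% vs. 20%), but version B brings in ten times the revenue ($5,000 vs. $500) — which is why the goal you measure should be tied to dollars, not clicks.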

What page of your website do you wish performed better? Share an idea for how you could use split testing to make it more successful in the comments.